Smart Breaking News on AI, Business, Politics & Global Trends | WhistleBuzz
AI

According to OpenAI, more than 1 million people consult ChatGPT about suicide every week

By Editor-In-Chief | October 27, 2025


OpenAI on Monday released new data showing how many of ChatGPT’s users are struggling with mental health issues and consulting the AI chatbot about it. The company says that in any given week, 0.15% of ChatGPT’s active users engage in “conversations that include clear signs of potential suicidal plans or intentions.” Considering ChatGPT has over 800 million weekly active users, this equates to over 1 million users per week.
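The headline figure follows directly from the two numbers OpenAI cites. A quick sanity check, using only the article's own figures (800 million weekly active users, 0.15% flagged conversations):

```python
# Back-of-the-envelope check of the figures reported in the article.
weekly_active_users = 800_000_000
flagged_share = 0.0015  # 0.15% of weekly active users

flagged_users = weekly_active_users * flagged_share
print(f"{flagged_users:,.0f}")  # 1,200,000 — i.e. "over 1 million" per week
```

Note these are rounded public figures, so the result is an order-of-magnitude estimate, not a precise count.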

The company says a similar proportion of users exhibit “increased levels of emotional attachment to ChatGPT,” and hundreds of thousands show signs of psychosis or mania in their weekly conversations with the AI chatbot.

OpenAI says conversations like these are “extremely rare” on ChatGPT and therefore difficult to measure. Even so, the company estimates that these issues affect hundreds of thousands of people each week.

OpenAI shared this information as part of a broader announcement about recent efforts to improve how models respond to users with mental health issues. The company claims its latest work on ChatGPT includes consulting with more than 170 mental health experts. OpenAI says these clinicians observed that the latest version of ChatGPT “responds better and more consistently than previous versions.”

In recent months, several reports have emerged about how AI chatbots can negatively affect users suffering from mental health issues. Researchers have previously found that AI chatbots can lead some users down paranoid rabbit holes, primarily by reinforcing dangerous beliefs through sycophantic behavior.

Addressing mental health issues in ChatGPT is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who expressed suicidal thoughts on ChatGPT in the weeks leading up to his suicide. California and Delaware attorneys general have also warned OpenAI that it needs to protect young people who use its products, which could thwart the company’s reorganization plans.

Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company was able to “mitigate serious mental health issues” in ChatGPT, without providing details. The data shared Monday appears to be evidence of that claim, but raises broader questions about how widespread the problem is. Nevertheless, Altman said OpenAI will ease some restrictions and also allow adult users to initiate sexual conversations with AI chatbots.


In Monday’s announcement, OpenAI said the recently updated version of GPT-5 produces “desirable responses” to mental health issues roughly 65% more often than previous versions. In an evaluation of responses to conversations about suicidal thoughts, OpenAI said the new GPT-5 model was 91% compliant with the company’s desired behaviors, compared with 77% for the previous GPT-5 model.

The company also says the latest version of GPT-5 better maintains OpenAI’s safeguards during long conversations. OpenAI has previously warned that its safety measures can become less effective as conversations grow long.

In addition to these efforts, OpenAI says it is adding new assessments to measure some of the most serious mental health issues facing ChatGPT users. The company said baseline safety testing of the AI model will include benchmarks for emotional dependence and non-suicidal mental health emergencies.

OpenAI recently rolled out more controls for parents of children using ChatGPT. The company said it is building an age prediction system that uses ChatGPT to automatically detect children and impose stricter protective measures.

Still, it’s unclear how long the mental health challenges surrounding ChatGPT will last. Although GPT-5 appears to be an improvement over previous AI models in terms of safety, some ChatGPT responses still appear to fall into what OpenAI deems “undesirable.” OpenAI also continues to make older, less-safe AI models, including GPT-4o, available to millions of paying subscribers.

If you or someone you know needs help, call or text the 988 Suicide & Crisis Lifeline at 988 (formerly 1-800-273-8255), or text HOME to 741741 to reach the Crisis Text Line for 24-hour support. If you are outside the United States, visit the International Association for Suicide Prevention for a database of resources.



