AI

According to OpenAI, more than 1 million people consult ChatGPT every week about suicide.

By whistle_949 | October 27, 2025 | 4 min read

OpenAI on Monday released new data showing how many of ChatGPT’s users are struggling with mental health issues and consulting the AI chatbot about it. The company says that in any given week, 0.15% of ChatGPT’s active users engage in “conversations that include clear signs of potential suicidal plans or intentions.” Considering ChatGPT has over 800 million weekly active users, this equates to over 1 million users per week.
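
That scale follows directly from the figures above; here is a quick back-of-the-envelope check (a minimal sketch, using the 800 million weekly active users and the 0.15% rate reported in this article, not independently verified):

```python
# Rough arithmetic check of the scale reported above:
# 0.15% of roughly 800 million weekly active users.
weekly_active_users = 800_000_000   # figure cited in the article
share_flagged = 0.0015              # 0.15% of weekly active users

affected_per_week = weekly_active_users * share_flagged
print(f"{affected_per_week:,.0f}")  # prints 1,200,000
```

That comes out to roughly 1.2 million people per week, consistent with the “over 1 million” figure.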

The company says a similar proportion of users exhibit “increased levels of emotional attachment to ChatGPT,” and hundreds of thousands show signs of psychosis or mania in their weekly conversations with the AI chatbot.

OpenAI says this type of conversation on ChatGPT is “extremely rare” and therefore difficult to measure. However, the company estimates that these issues affect hundreds of thousands of people each week.

OpenAI shared this information as part of a broader announcement about recent efforts to improve how models respond to users with mental health issues. The company claims its latest work on ChatGPT includes consulting with more than 170 mental health experts. OpenAI says these clinicians observed that the latest version of ChatGPT “responds better and more consistently than previous versions.”

In recent months, several reports have surfaced about how AI chatbots can negatively affect users struggling with mental health issues. Researchers have previously found that AI chatbots can lead some users down paranoid rabbit holes, primarily by reinforcing dangerous beliefs through sycophantic behavior.

Addressing mental health issues in ChatGPT is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who expressed suicidal thoughts to ChatGPT in the weeks leading up to his suicide. The attorneys general of California and Delaware have also warned OpenAI that it needs to protect young people who use its products, a warning that could thwart the company’s planned restructuring.

Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company was able to “mitigate serious mental health issues” in ChatGPT, without providing details. The data shared Monday appears to be evidence of that claim, but raises broader questions about how widespread the problem is. Nevertheless, Altman said OpenAI will ease some restrictions and also allow adult users to initiate sexual conversations with AI chatbots.


In Monday’s announcement, OpenAI said the recently updated version of GPT-5 delivers “desirable responses” to mental health issues approximately 65% more often than previous versions. In an evaluation testing the AI’s responses to conversations about suicidal thoughts, OpenAI said the new GPT-5 model was 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT-5 model.

The company also says the latest version of GPT-5 better maintains OpenAI’s protections during long conversations. OpenAI has previously warned that its safeguards can become less effective as conversations grow longer.

In addition to these efforts, OpenAI says it is adding new assessments to measure some of the most serious mental health issues facing ChatGPT users. The company said baseline safety testing of the AI model will include benchmarks for emotional dependence and non-suicidal mental health emergencies.

OpenAI recently rolled out more controls for parents of children who use ChatGPT. The company says it is building an age prediction system to automatically detect children using ChatGPT and impose stricter protective measures.

Still, it’s unclear how long the mental health challenges surrounding ChatGPT will last. Although GPT-5 appears to be an improvement over previous AI models in terms of safety, some ChatGPT responses still appear to fall into what OpenAI deems “undesirable.” OpenAI also continues to make older, less safe AI models, including GPT-4o, available to millions of paying subscribers.

If you or someone you know needs help, call or text 988 to reach the 988 Suicide & Crisis Lifeline (formerly 1-800-273-8255), or text HOME to 741-741 to reach the Crisis Text Line for 24-hour support. If you are outside the United States, visit the International Association for Suicide Prevention website for a database of resources.


