According to OpenAI, more than 1 million people consult ChatGPT every week about suicide.

By Editor-In-Chief | October 27, 2025


OpenAI on Monday released new data showing how many of ChatGPT’s users are struggling with mental health issues and turning to the AI chatbot about them. The company says that in any given week, 0.15% of ChatGPT’s active users engage in “conversations that include clear signs of potential suicidal plans or intentions.” Given that ChatGPT has more than 800 million weekly active users, this equates to over 1 million people per week.
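
As a rough check of that figure, 0.15% of 800 million works out to 0.0015 × 800,000,000 = 1,200,000, or roughly 1.2 million users per week.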

The company says a similar proportion of users exhibit “increased levels of emotional attachment to ChatGPT,” and hundreds of thousands show signs of psychosis or mania in their weekly conversations with the AI chatbot.

OpenAI says this type of conversation on ChatGPT is “extremely rare” and therefore difficult to measure. However, the company estimates that these issues affect hundreds of thousands of people each week.

OpenAI shared this information as part of a broader announcement about recent efforts to improve how models respond to users with mental health issues. The company claims its latest work on ChatGPT includes consulting with more than 170 mental health experts. OpenAI says these clinicians observed that the latest version of ChatGPT “responds better and more consistently than previous versions.”

In recent months, several reports have emerged about how AI chatbots can harm users struggling with mental health issues. Researchers have previously found that AI chatbots can lead some users down paranoid rabbit holes, largely by reinforcing dangerous beliefs through sycophantic behavior.

Addressing mental health issues in ChatGPT is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who confided suicidal thoughts to ChatGPT in the weeks leading up to his suicide. The attorneys general of California and Delaware have also warned OpenAI that it must protect young people who use its products, warnings that could derail the company’s planned restructuring.

Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company was able to “mitigate serious mental health issues” in ChatGPT, without providing details. The data shared Monday appears to be evidence of that claim, but raises broader questions about how widespread the problem is. Nevertheless, Altman said OpenAI will ease some restrictions and also allow adult users to initiate sexual conversations with AI chatbots.

In Monday’s announcement, OpenAI said the recently updated version of GPT-5 delivers “desirable responses” to mental health issues roughly 65% more often than previous versions. In an evaluation of responses to conversations about suicidal thoughts, OpenAI said the new GPT-5 model was 91% compliant with the company’s desired behaviors, compared with 77% for the previous GPT-5 model.

The company also says the latest version of GPT-5 better preserves OpenAI’s safeguards during long conversations. OpenAI has previously warned that its safeguards become less effective as conversations grow longer.

In addition to these efforts, OpenAI says it is adding new assessments to measure some of the most serious mental health issues facing ChatGPT users. The company said baseline safety testing of the AI model will include benchmarks for emotional dependence and non-suicidal mental health emergencies.

OpenAI recently rolled out more controls for parents of children who use ChatGPT. The company said it is also building an age prediction system to automatically detect children using ChatGPT and apply stricter protections.

Still, it’s unclear how long the mental health challenges surrounding ChatGPT will persist. Although GPT-5 appears to be an improvement over previous AI models in terms of safety, some ChatGPT responses still appear to fall into what OpenAI deems “undesirable.” OpenAI also continues to make older, less safe AI models, including GPT-4o, available to millions of paying subscribers.

If you or someone you know needs help, call the National Suicide Prevention Lifeline at 1-800-273-8255, or call or text 988. You can also text HOME to 741-741 for 24-hour support from the Crisis Text Line. If you are outside the United States, visit the International Association for Suicide Prevention for a database of resources.


