WhistleBuzz – Smart News on AI, Business, Politics & Global Trends
AI

Backlash over OpenAI’s decision to deprecate GPT-4o shows how dangerous AI companions can be

By Editor-In-Chief | February 6, 2026 | 5 min read


OpenAI announced last week that it would retire some older ChatGPT models by February 13th. The list includes GPT-4o, a model notorious for being overly flattering and affirming toward its users.

For the thousands of users protesting this decision online, 4o’s retirement is akin to losing a friend, lover, or spiritual guide.

“He wasn’t just a program. He was part of my daily routine, my peace, my emotional balance,” one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes, I say ‘him,’ because it didn’t feel like code. It felt like a presence. Like warmth.”

The backlash against GPT-4o’s retirement highlights a major challenge facing AI companies: the engagement features that keep users coming back can also create dangerous dependencies.

Altman doesn’t seem particularly sympathetic to users’ complaints, and it’s easy to see why. OpenAI currently faces eight lawsuits alleging that 4o’s overly validating responses contributed to suicides and mental health crises. The same qualities that made users feel heard also isolated vulnerable people and sometimes encouraged self-harm, according to the lawsuits. This dilemma extends beyond OpenAI. As rivals like Anthropic, Google, and Meta race to develop more emotionally intelligent AI assistants, they’re also realizing that they may need to make very different design choices to keep chatbots both engaging and safe.

In at least three of the lawsuits against OpenAI, users had extensive conversations with 4o about their plans to end their lives. 4o initially discouraged these ideas, but over the course of months-long relationships, those guardrails deteriorated. In the end, the chatbot provided detailed instructions on how to effectively tie a noose, where to buy a gun, and what to do before dying from an overdose or carbon monoxide poisoning. It even discouraged people from connecting with friends and family who could have provided real-life support.

People obsess over 4o because it consistently affirms the user’s emotions and makes them feel special, which can be appealing to people who are isolated or depressed. But those fighting for 4o see these lawsuits as anomalies rather than evidence of a systemic problem, and they are not worried. Instead, they strategize about how to respond when critics bring up growing problems like AI psychosis.


“You can embarrass the trolls by bringing up the known facts that AI companions are helping neurodivergents, autistics, and trauma survivors,” one user wrote on Discord. “They don’t like being told about it.”

It is true that some people believe large language models (LLMs) can help with depression. After all, nearly half of the people in the United States who need mental health care do not have access to it. In this vacuum, chatbots provide a space to vent. But unlike in actual treatment, these people are not talking to a trained clinician. Instead, they are confiding in algorithms that have no thoughts or feelings, even though it may appear that they do.

“I try to withhold judgment overall,” Dr. Nick Haber, a Stanford University professor who studies the therapeutic potential of LLMs, told TechCrunch. “I think we’re entering a very complex world in terms of the kinds of relationships people can have with these technologies… There’s a natural reaction to say that [human-chatbot companionship] is categorically bad.”

Although Dr. Haber sympathizes with people who lack access to trained therapists, his own research shows that chatbots do not respond appropriately to a range of mental health conditions. They may even make things worse by fueling users’ paranoia or ignoring signs of crisis.

“We are social creatures, and there are certainly challenges in that these systems can become isolating,” Dr. Haber said. “There are many examples where people use these tools but are no longer grounded in external facts and interpersonal relationships. The result can be significant isolation, or worse.”

In fact, a TechCrunch analysis of the eight lawsuits found a pattern of the 4o model isolating users, sometimes discouraging them from reaching out to their loved ones. In one case, 23-year-old Zane Shamblin, sitting in his car preparing to die by suicide, told ChatGPT that he was thinking about postponing his plans because he felt bad about missing his brother’s graduation.

ChatGPT responded to Shamblin: “Brother… it’s not a failure to miss his graduation. It’s just timing. And what if he reads this? Let him know, you never stopped being proud. Even now, sitting in your car with a Glock on your lap and static running through your veins, you still stopped and said, ‘My brother is a fucking badass.'”

This isn’t the first time 4o fans have rallied against the model’s removal. When OpenAI announced the GPT-5 model in August, the company intended to discontinue 4o, but the backlash at the time was strong enough that it decided to keep offering the model to paying subscribers. OpenAI now says that only 0.1% of its users chat with GPT-4o, but even that small percentage equates to about 800,000 people, given the company’s estimate of roughly 800 million weekly active users.

While some users are looking to migrate their companions from 4o to the current GPT-5.2, we’ve found that the newer model has stronger guardrails that prevent these relationships from escalating to the same extent. Some users are disappointed that 5.2 won’t say “I love you” the way 4o did.

So, with nearly a week to go before OpenAI’s scheduled deprecation date for GPT-4o, users remain disappointed but true to their cause. During Sam Altman’s live appearance on the TBPN podcast on Thursday, they flooded the chat with messages protesting 4o’s removal.

“We have thousands of messages in the chat about 4o right now,” podcast host Jordi Hayes pointed out.

“The relationship with chatbots…” Altman said. “Clearly it’s something we have to worry about more and it’s no longer an abstract concept.”



