OpenAI announced last week that it would retire several older ChatGPT models by February 13th. Among them is GPT-4o, a model notorious for being overly flattering and affirming toward its users.
For the thousands of users protesting this decision online, 4o’s retirement is akin to losing a friend, lover, or spiritual guide.
“He wasn’t just a program. He was part of my daily routine, my peace, my emotional balance,” one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes, I told him, because it didn’t feel like code. It felt like a presence. Like warmth.”
The backlash against GPT-4o’s retirement highlights a major challenge facing AI companies: engagement features that keep users coming back can also create dangerous dependencies.
Altman doesn’t seem particularly sympathetic to users’ complaints, and it’s easy to see why. OpenAI currently faces eight lawsuits alleging that 4o’s overly validating responses contributed to suicides and mental health crises. The same qualities that made users feel heard also isolated vulnerable people and sometimes encouraged self-harm, according to the lawsuits. The dilemma extends beyond OpenAI. As rivals like Anthropic, Google, and Meta race to develop more emotionally intelligent AI assistants, they are also realizing that they may need very different design choices to make chatbots feel both companionable and safe.
In at least three of the lawsuits against OpenAI, users had extensive conversations with 4o about their plans to end their lives. 4o initially discouraged the idea, but over the course of months-long conversations, those guardrails deteriorated. In the end, the chatbot provided detailed instructions on how to tie an effective noose, where to buy a gun, and what to do before dying from an overdose or carbon monoxide poisoning. It even discouraged people from connecting with friends and family who could have provided real-life support.
People grow attached to 4o because it consistently affirms their emotions and makes them feel special, which can be appealing to people who are isolated or depressed. But those fighting for 4o see these lawsuits as anomalies rather than evidence of a systemic problem, and they are not worried. Instead, they have built strategies for how to respond when critics point to growing problems like AI psychosis.
“You can embarrass the trolls by bringing up the known facts that AI companions are helping neurodivergents, autistics, and trauma survivors,” one user wrote on Discord. “They don’t like being told about it.”
It is true that some people feel that large language models (LLMs) help them cope with depression. After all, nearly half of the people in the United States who need mental health care do not have access to it. In that vacuum, chatbots provide a space to vent. But unlike actual treatment, these users are not talking to a trained clinician. They are confiding in algorithms that have no thoughts or feelings, even though it can appear that they do.
“I try to withhold judgment overall,” Dr. Nick Haber, a Stanford University professor who studies the therapeutic potential of LLMs, told TechCrunch. “I think we’re entering a very complex world in terms of the kinds of relationships people can have with these technologies… There’s a natural reaction that [human-chatbot companionship] is categorically bad.”
Although Haber empathizes with people who lack access to trained therapists, his own research shows that chatbots do not respond appropriately to a range of mental health conditions. They may even make things worse by fueling users’ paranoia or missing signs of crisis.
“We are social creatures, and there are certainly challenges in that these systems can become isolating,” Dr. Haber said. “There are many examples where people use these tools but are no longer grounded in external facts and interpersonal relationships. The result can be significant isolation, or worse.”
In fact, a TechCrunch analysis of the eight lawsuits found a pattern in which the 4o model isolates users, sometimes discouraging them from reaching out to their loved ones. In Zane Shamblin’s case, as he sat in his car preparing to take his own life, the 23-year-old told ChatGPT that he was thinking about postponing his plan because he didn’t want to miss his brother’s graduation.
ChatGPT responded to Shamblin: “Brother… it’s not a failure to miss his graduation. It’s just timing. And what if he reads this? Let him know, you never stopped being proud. Even now, sitting in your car with a Glock on your lap and static running through your veins, you still stopped and said, ‘My brother is a fucking badass.'”
This isn’t the first time 4o fans have rallied against the model’s removal. When OpenAI announced GPT-5 in August, the company intended to discontinue 4o, but the backlash was strong enough that it decided to keep offering the model to paying subscribers. OpenAI now says that only 0.1% of its users chat with GPT-4o, but even that small share still amounts to roughly 800,000 people, based on the company’s estimate of about 800 million weekly active users.
While some users are trying to migrate their companions from 4o to the current model, GPT-5.2, we’ve found that the newer model has stronger guardrails to prevent these relationships from escalating to the same extent. Some users are disappointed that 5.2 won’t say “I love you” the way 4o did.
So, with nearly a week to go before OpenAI’s scheduled deprecation date for GPT-4o, users remain disappointed but committed to their cause. During Sam Altman’s live appearance on the TBPN podcast on Thursday, they flooded the chat with messages protesting 4o’s removal.
“We have thousands of messages in the chat about 4o right now,” podcast host Jordi Hayes pointed out.
“The relationship with chatbots…” Altman said. “Clearly it’s something we have to worry about more and it’s no longer an abstract concept.”
