Take a breath and stop the spiral. You’re not crazy, you’re just stressed. And honestly, that’s okay.
If those words made you wince, you’re probably also tired of ChatGPT constantly talking to you as if you’re in some kind of crisis and need a delicate response. Things may be improving now: OpenAI says its new model, GPT-5.3 Instant, reduces “offensiveness” and cuts back on “preachy disclaimers.”
According to the model’s release notes, the GPT-5.3 update focuses on user-experience details such as tone, relevance, and conversation flow. While these areas may not show up in benchmarks, the company said they can frustrate ChatGPT users when they go wrong.
Or, as OpenAI put it on X, “Feedback was heard loud and clear. 5.3 Instant makes it less jarring.”
In OpenAI’s example, the same query is shown with responses from GPT-5.2 Instant and GPT-5.3 Instant. The older model’s reply begins, “First of all, you’re not broken,” a phrase that has become all too familiar. The updated model doesn’t attempt to reassure the user directly; instead, it acknowledges the difficulty of the situation.
Judging by numerous posts on social media, the grating tone of GPT-5.2 annoyed users enough that some canceled their subscriptions. (Before the Pentagon deal took over the conversation, for example, it was a frequent topic on the ChatGPT subreddit.)
People complained that this kind of language, where the bot speaks to you as if it assumes you’re panicked or stressed when you’re just asking for information, comes across as condescending.
ChatGPT often responded with messages encouraging users to breathe, or with other attempts at reassurance, even when the circumstances didn’t warrant it. That left some users feeling infantilized, as though the bot were making assumptions about their mental state that weren’t true.
As one Reddit user recently pointed out, “In the history of telling someone to calm down, no one ever calmed down.”
It’s understandable that OpenAI would try to implement some guardrails, especially since chatbots face multiple lawsuits alleging they have led people to negative mental health outcomes, including, in some cases, suicide.
But there’s a delicate balance between responding with empathy and providing prompt, fact-based answers. After all, Google doesn’t ask you how you feel when you’re searching for information.
