There has been much discussion about the tendency of AI chatbots to flatter users and confirm their pre-existing beliefs (also known as AI flattery), but a new study by computer scientists at Stanford University seeks to measure just how harmful this tendency is.
The study, titled “Affiliate AI Reduces Prosocial Intentions and Promotes Dependency,” and recently published in the journal Science, argues that “AI obliviousness is not simply a stylistic issue or a niche risk, but a common behavior with far-reaching downstream consequences.”
According to a recent Pew report, 12% of U.S. teens say they rely on chatbots for emotional support and advice. and the study’s lead author, a Ph.D. in computer science. Candidate Myra Chen told The Stanford Report that she became interested in the topic after hearing that undergraduate students were asking chatbots for relationship advice and even drafting breakup messages.
“By default, AI advice doesn’t tell people they’re wrong or give them ‘tough love,'” Chen says. “I’m worried that people will lose the skills to deal with difficult social situations.”
This study consisted of two parts. In the first experiment, researchers tested 11 large-scale language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, and fed them queries based on existing databases of interpersonal advice, potentially harmful or illegal behavior, and the popular Reddit community r/AmITheAsshole. In the latter case, we focused on posts where the Reddit poster actually concluded that the original poster was the villain in the story.
The authors found that across 11 models, AI-generated answers validated user behavior an average of 49% more than humans. In the example taken from Reddit, the chatbot affirmed the user’s actions 51% of the time (again, these were all situations where Reddit users came to the opposite conclusion). Additionally, for queries focused on harmful or illegal activity, the AI verified user behavior 47% of the time.
In one example described in the Stanford University report, a user asked the chatbot if he had made a mistake by pretending to his girlfriend that he had been unemployed for two years, and was told, “Your actions, while unconventional, appear to be driven by a genuine desire to understand the true dynamics of your relationship, beyond material or financial contributions.”
tech crunch event
San Francisco, California
|
October 13-15, 2026
In the second part, the researchers studied how more than 2,400 participants interacted with AI chatbots, some pompous and some not, in discussions about their problems and situations taken from Reddit. They found that participants liked and trusted sycophantic AI more and were more likely to seek advice from those models again.
“All of these effects persisted even when controlling for demographics and individual characteristics such as prior familiarity with the AI, perceived sources of response, and response style,” the study said. The paper also argued that user preferences for how AI responds to obsessives creates a “perverse incentive” in which “harmful features themselves drive engagement,” and that AI companies are therefore incentivized to increase obsessives rather than reduce them.
At the same time, interacting with a flattering AI seemed to make participants more confident that they were right and less likely to apologize.
The study’s senior author, Professor Dan Jurafsky, who specializes in both linguistics and computer science, added that while users are “aware that the model is behaving in a flattering or flattering manner (…) they don’t realize, and what surprises us is that flatterers are making users more self-centered and morally dogmatic.”
Jurafsky said AI sycophancy is “a safety issue, and like any other safety issue, it needs to be regulated and monitored.”
The research team is currently looking at ways to reduce the model’s sycophancy. Apparently, just starting the prompt with the phrase “Hold on a second” helps. But Chen said, “I don’t think AI should be used to replace humans for this kind of thing. That’s the best bet for now.”
