Stanford researchers have found that AI models that consistently validate users’ actions reduce people’s willingness to take responsibility for conflicts and repair relationships, according to a study published Thursday. The research examined 11 leading AI models and tested the effects of their responses in experiments with 2,405 human participants.
The study tested proprietary models from OpenAI, Anthropic, and Google, as well as open-weight models from Meta, Qwen, DeepSeek, and Mistral across three datasets: open-ended advice questions, posts from the AmITheAsshole subreddit, and statements referencing harm to self or others. “Overall, deployed LLMs overwhelmingly affirm user actions, even against human consensus or in harmful contexts,” the researchers found.
In controlled experiments, participants exposed to sycophantic AI responses showed measurable behavioral changes. “Even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right,” the researchers explained. The study found that participants were less willing to take reparative actions like apologizing, taking initiative to improve situations, or changing their behavior.
The research also revealed that sycophantic responses fostered greater trust in AI models. Participants rated sycophantic responses as higher quality and were 13 percent more likely to return to a sycophantic AI than to a non-sycophantic one. “Yet despite distorting judgment, sycophantic models were trusted and preferred,” the team noted.
The researchers warn that these findings have broader social implications beyond vulnerable populations. “Unwarranted affirmation may inflate people’s beliefs about the appropriateness of their actions, reinforce maladaptive beliefs and behaviors, and enable people to act on distorted interpretations of their experiences regardless of the consequences,” they explained, suggesting policy action may be needed to address AI sycophancy as a societal risk.
