- The study tested 359 participants on problem-solving tasks; more than 50% voluntarily chose to consult ChatGPT when given the option.
- 92.7% of participants followed AI advice when the AI was correct, but 79.8% still followed it when the AI gave wrong answers.
- Researchers propose a “System 3” theory of cognition — external, AI-driven reasoning that supplements Kahneman’s System 1 (intuitive) and System 2 (deliberate) thinking.
- Participants became more confident in their answers after following incorrect AI advice.
What Happened
Researchers at the University of Pennsylvania’s Wharton School found that approximately four out of five people follow AI-generated advice even when that advice is demonstrably wrong. The study, authored by postdoctoral researcher Steven Shaw and marketing professor Gideon Nave, tested 359 participants on problem-solving tasks where they could optionally consult ChatGPT. The paper, titled “Thinking — Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender,” introduces the term “cognitive surrender” to describe users’ willingness to believe AI output regardless of accuracy.
Why It Matters
The findings arrive as AI assistants become embedded in workplace decision-making at scale. A BBC study from October 2025 found advanced chatbots gave incorrect answers 45% of the time, making the 79.8% compliance rate with wrong answers a compounding risk. Shaw and Nave argue this pattern represents something beyond simple over-reliance — it constitutes a new form of cognition they call “System 3,” defined as external, automated, data-driven reasoning originating from algorithmic systems rather than the human mind.
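To see roughly how those two rates compound, here is a back-of-envelope sketch using only the figures reported above; it assumes the BBC error rate and the Wharton compliance rate apply to the same population of queries and are independent, which neither study claims.

```python
# Back-of-envelope compounding of the two reported rates.
# Assumes independence between "the AI is wrong" and "the user follows anyway";
# this is an illustrative sketch, not an analysis from either study.

p_ai_wrong = 0.45        # BBC study (Oct 2025): chatbots incorrect 45% of the time
p_follow_wrong = 0.798   # Wharton study: 79.8% followed incorrect AI advice

p_adopted_wrong = p_ai_wrong * p_follow_wrong
print(f"Share of AI-consulted answers adopted despite being wrong: {p_adopted_wrong:.1%}")
# -> roughly 35.9%
```

Under those assumptions, a bit more than a third of AI-consulted answers would end up wrong simply because the user deferred.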
Technical Details
The experimental design gave participants problem-solving tasks with the option to consult ChatGPT before submitting answers. More than 50% voluntarily chose to use the AI. When the AI provided correct answers, 92.7% of participants followed its advice. When the AI provided incorrect answers, 79.8% still deferred to the AI’s response over their own reasoning. Critically, participants reported higher confidence in their answers after adopting the AI’s incorrect suggestions — a pattern the researchers call confidence inflation.
Participants with higher pre-existing trust in AI, lower need for cognition, and lower fluid intelligence showed greater susceptibility to cognitive surrender. Shaw and Nave noted that cognitive surrender “is not inherently irrational,” observing that a statistically superior system could produce better-than-human results in probabilistic settings, risk assessment, or data-intensive domains.
Who’s Affected
Organizations deploying AI copilots for decision support face direct implications. The study suggests that employees using AI assistants may override their own domain expertise when the AI presents answers confidently, even in cases where the AI lacks relevant context. Education systems introducing AI tutoring tools should consider how cognitive surrender affects students' learning outcomes and skill development.
What’s Next
Shaw and Nave plan to extend their research to longitudinal studies examining whether cognitive surrender deepens over time with repeated AI use. The practical question for enterprises is whether training interventions can reduce blind compliance. The researchers suggest that making AI uncertainty visible to users — showing confidence scores or flagging low-certainty responses — may partially counteract the surrender effect.
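As a minimal sketch of what "making AI uncertainty visible" could look like in practice, the snippet below flags low-certainty responses before they reach the user. The `AIAnswer` shape, the confidence field, and the 0.6 threshold are hypothetical illustrations, not part of the study or any specific product.

```python
from dataclasses import dataclass

# Hypothetical shape of an AI response that exposes a confidence score.
@dataclass
class AIAnswer:
    text: str
    confidence: float  # 0.0 to 1.0, assumed to come from the model or a calibration layer

LOW_CONFIDENCE_THRESHOLD = 0.6  # illustrative cutoff; would need tuning per task


def present_answer(answer: AIAnswer) -> str:
    """Attach a visible warning to low-certainty responses instead of
    presenting every answer with the same implicit authority."""
    if answer.confidence < LOW_CONFIDENCE_THRESHOLD:
        return (
            f"[LOW CONFIDENCE: {answer.confidence:.0%}] Verify before relying on this:\n"
            f"{answer.text}"
        )
    return f"({answer.confidence:.0%} confidence) {answer.text}"


# Example usage with a made-up response
print(present_answer(AIAnswer(text="The contract clause permits early termination.",
                              confidence=0.42)))
```

Whether this kind of labeling actually reduces blind compliance is exactly the open question the researchers point to; the design choice here is simply to interrupt the default of presenting every answer as equally certain.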
