- Educator and AI researcher Timothy Cook argues that AI creates fundamentally different cognitive risks for adults versus children: adults experience recoverable skill atrophy, while children face permanent cognitive foreclosure.
- Research by Gerlich (2025) found participants over 46 showed higher critical thinking with lower AI reliance, while those aged 17-25 showed the opposite pattern: higher AI use alongside lower critical thinking.
- A 2026 study by Shen and Tamkin found adult developers delegating to AI produced working code but scored 17% lower on conceptual understanding tests.
- Cook warns that children cannot meaningfully audit AI output because auditing requires the domain knowledge they are still developing.
What Happened
Timothy Cook, an international educator and AI researcher, published an analysis in Psychology Today arguing that discussions about AI’s cognitive effects conflate two distinct phenomena. Adults who offload thinking to AI experience cognitive atrophy, a weakening of existing skills that remains recoverable. Children using AI face cognitive foreclosure, where foundational neural pathways for critical thinking never develop in the first place.
“What AI does to a 45-year-old is likely categorically different than what it does to a 14-year-old,” Cook wrote. “You can’t atrophy a muscle that was never built.”
The article appeared in Cook’s Psychology Today column, The Algorithmic Mind, in March 2026. Cook holds a Master of Education degree and works as an international educator focused on AI’s intersection with learning and development.
Why It Matters
Most public debate about AI and cognition treats all users as a single group. Cook’s framework introduces an age-dependent distinction that could change how schools, parents, and policymakers approach AI access for young people. If the cognitive risks for children are structurally different from those for adults, then policies designed around adult users may be inadequate or counterproductive when applied to minors.
The distinction also challenges the common assumption that children are “digital natives” who naturally adapt to new technology. Cook argues the opposite: children are more vulnerable precisely because they lack the baseline cognitive skills needed to use AI as a tool rather than a substitute for thinking.
The framework has practical implications for the ongoing debate about AI in schools. Many districts have moved from outright banning AI tools to integrating them into curricula, often based on research conducted primarily with adult users. Cook’s analysis suggests that extrapolating adult outcomes to children may be fundamentally flawed.
Technical Details
Cook draws on three recent studies to support the framework. Gerlich (2025) found that participants over 46 demonstrated higher critical thinking scores with lower AI reliance, while participants aged 17 to 25 showed the mirror image: higher AI use associated with lower critical thinking scores.
Shen and Tamkin (2026) studied adult software developers who delegated coding tasks to AI. The developers produced functional code but performed 17% worse on conceptual understanding tests compared to a non-AI control group. Cook classifies this as recoverable atrophy since the developers possessed the underlying knowledge and could rebuild the skill with practice.
Sourati et al. (2026) found that large language models systematically homogenize perspectives toward “Western, educated, mainstream norms.” Cook connects this to a concept he calls algorithmic epistemic injustice, where children’s identity formation and worldview development are shaped by the biases embedded in the AI systems they rely on.
Who’s Affected
The primary concern centers on children and adolescents between roughly 10 and 18 years old who are increasingly using AI tools for schoolwork, writing, and research. Cook argues this age group faces the highest risk because they are actively building the cognitive foundations that adults already possess.
Educators and school administrators face a practical dilemma. Many schools have adopted AI tools to improve learning outcomes, but Cook’s framework suggests that early AI use may undermine the developmental processes those tools are meant to support. Parents navigating AI access at home confront similar questions about when and how much AI assistance is appropriate.
Policymakers designing AI regulations for minors are also affected. Current approaches tend to focus on content safety and data privacy rather than cognitive development. Cook’s framework suggests that even safe, well-designed AI tools could cause developmental harm if introduced before children build foundational reasoning skills.
What’s Next
Cook’s analysis identifies the problem but concedes that the obvious remedy, teaching children to audit AI output, is self-limiting: “Auditing requires the exact domain knowledge that the child is developing.” The skills children need to use AI responsibly are the same skills that premature AI use may prevent them from acquiring.
Research specifically tracking longitudinal cognitive development in children who use AI tools regularly remains limited. Most existing studies examine short-term performance effects rather than long-term developmental trajectories. Until that evidence base grows, Cook’s framework remains a theoretical distinction that awaits empirical confirmation in younger populations.
Related Reading
- BlackRock CEO Larry Fink Warns AI Boom Could Widen Wealth Inequality
- Software Engineer Warns AI Code Generation Creates $10T Maintenance Crisis
- US Advisory Body Warns China Open-Source AI Strategy Creates Self-Reinforcing Advantage Over American Labs
- Developer Creates 397B Parameter Model Runner for 48GB MacBook Pro