Sixty-four percent of U.S. teenagers have used AI chatbots, and 57% have used them to search for information — making information search the most common chatbot use case for a generation still developing critical evaluation skills. The data comes from a Pew Research Center survey of teens ages 13-17, conducted September 25 to October 9, 2025, and published February 24, 2026.
What Teenagers Are Actually Doing
The top use cases reveal the scale of the shift: 57% search for information, 54% get help with schoolwork, and 47% use chatbots for entertainment. Roughly 40% summarize articles, books, or videos, and 12% turn to chatbots for emotional support or advice. Frequency matters too: 28% use chatbots daily, including 16% who do so several times a day or “almost constantly.”
Only 51% of parents say their child uses AI chatbots — a 13-point gap that suggests a significant portion of teenage AI use happens without parental awareness or oversight. The tools teenagers are using for information retrieval are the same tools that hallucinate sources and present fabricated facts with absolute confidence.
The Real Problem Isn’t Cheating
The regulatory and educational conversation around teenagers and AI has centered overwhelmingly on academic dishonesty — students using ChatGPT to write essays. The Pew data suggests that focus is misplaced. The more significant concern is the 57% of teenagers using hallucination-prone tools as a primary information source, outside any educational context where a teacher might catch errors.
When a chatbot invents a historical event, fabricates a scientific study, or attributes a quote to someone who never said it, a teenager searching for information has no built-in mechanism to detect the error. Unlike Google search results — which at least link to verifiable sources — chatbot responses present synthesized text with no source trail. The information literacy crisis is not theoretical. It is 57% of American teenagers treating unreliable tools as reliable ones, many of them daily.
