ANALYSIS

Quinnipiac Poll: 55% of Americans Say AI Harms Daily Life, Up from 44% Last Year

Anika Patel · Apr 20, 2026 · 3 min read
Engine Score 7/10 — Important
  • A March 2026 Quinnipiac University poll found 55% of Americans believe AI does more harm than good in their daily lives, up from 44% in April 2025 — an 11-point rise in under a year.
  • 64% of Americans believe AI does more harm than good in education, according to the same March 2026 poll.
  • A 2025 Pew Research Center survey found 76% of AI experts expected to personally benefit from AI, compared to just 24% of the U.S. general public.
  • An analysis published by LocalScribe argues the emotional force of anti-AI sentiment may partly reflect uncanny valley mechanisms first described by roboticist Masahiro Mori in 1970, now extending to AI text, voice, and video at scale.

What Happened

A March 2026 Quinnipiac University poll found that 55 percent of Americans believe AI does more harm than good in their day-to-day lives, up from 44 percent in April 2025 — an 11-percentage-point increase in under a year. The same poll found 64 percent of Americans believe AI harms education. An analysis published by LocalScribe argues this growth in negative sentiment is not reducible to policy skepticism alone, proposing that psychological mechanisms tied to the uncanny valley — the aversion people experience when a humanlike entity crosses into near-human but unsatisfying territory — may be accumulating into a category-level distrust of AI.

Why It Matters

The expert-public divide on AI has grown large enough to be measured consistently across independent surveys. A 2025 Pew Research Center survey found 76 percent of AI practitioners said AI would personally benefit them, while only 24 percent of the U.S. general public agreed — and public respondents were more likely to anticipate harm than benefit. The LocalScribe analysis contends that this gap reflects two different frames: practitioners evaluate AI through capability and technical utility, while much of the public encounters it through fraud, synthetic content, automated customer service, and job losses attributed to AI.

Technical Details

The analysis centers on the uncanny valley hypothesis, first published by Japanese roboticist Masahiro Mori in 1970. Mori proposed that human affinity for humanlike entities rises with increasing similarity but drops sharply when resemblance crosses into near-human territory without achieving authenticity — the uncanny valley. The analysis argues that AI now triggers this response across modalities at population scale: language models invoke expectations of understanding, voice agents invoke expectations of genuine responsiveness, and generated video invokes expectations of authenticity, each of which the underlying system fails to reliably satisfy.

The analysis cites a 2025 study on virtual agents that frames these reactions through the pathogen-avoidance hypothesis — the proposal that near-human abnormalities activate evolved avoidance responses as potential contamination signals. Researchers Moosa and Ud-Dean have proposed a broader danger-avoidance account, arguing aversion to near-human entities does not require visible decay and may reflect a more general threat-detection system. Researcher Karl F. MacDorman has connected the uncanny valley to terror management theory in published work, proposing that highly humanlike synthetic entities produce eeriness partly because they function as implicit reminders of human mortality and vulnerability. The analysis adds that AI-specific existential discourse — warnings about displacement and redundancy — may compound this effect by making mortality cues present in both implicit and explicit forms simultaneously.

Who’s Affected

AI product designers are most directly implicated. Chatbots, voice agents, AI tutors, customer service tools, and generated video all use human social cues — natural-sounding language, warm tone, empathic framing — as deliberate design choices. If repeated low-level uncanny experiences are accumulating into broader category-level aversion, interfaces optimized for humanlikeness may be generating trust deficits alongside engagement. Educators and students are specifically named in the Quinnipiac data, with nearly two-thirds of Americans expressing skepticism toward AI in learning contexts — a figure that will affect adoption decisions by school districts and edtech vendors.

What’s Next

The LocalScribe analysis explicitly acknowledges that extending uncanny valley theory from physically embodied robots to AI as a software category is a conceptual argument, not an established empirical finding. Research on repeated exposure to uncanny stimuli remains mixed: some human-robot interaction studies suggest familiarity reduces momentary startle responses but leaves a more stable, category-level distrust intact. Polling organizations are expected to continue tracking sentiment as AI embeds further into daily life; whether the 11-point Quinnipiac shift represents an ongoing trajectory or a plateau will require comparison against future surveys, particularly as new AI product categories — humanoid robots, AI companions, agentic systems — enter wider use.
