ANALYSIS

304 AI-Generated Pro-Trump Accounts Identified on TikTok Ahead of 2026 Midterms

Elena Volkov · Apr 19, 2026 · 3 min read
Engine Score 7/10 — Important
  • The New York Times tracked at least 304 AI-generated pro-Trump accounts on TikTok since January 2026, with independent researchers finding additional clusters on Instagram, Facebook, and YouTube.
  • Some accounts surpassed 35,000 followers and individual posts exceeded 500,000 views before being flagged by researchers.
  • Zuhair Lakhani, co-founder of AI advertising startup Doublespeed, told the Times each post costs an estimated $1–$3 to produce, making large-scale synthetic political content viable for a single operator.
  • TikTok classified the accounts as spam rather than coordinated influence operations, a characterization in tension with structural evidence cited in the Times investigation.

What Happened

At least 304 AI-generated accounts pushing pro-Trump political messaging appeared on TikTok between January and April 2026, according to a New York Times investigation reported by The Decoder. The accounts use synthetic video avatars — not real people — to deliver “America First” messaging and attacks on the “radical left” ahead of the November 2026 US midterm elections. Donald Trump himself shared content from one such account: a synthetic blonde avatar making unsubstantiated claims about California’s governor. The identities of those operating the accounts have not been publicly established.

Why It Matters

The findings demonstrate that AI-generated video avatars are now a viable tool for sustained political messaging at scale, with economics that remove the operational barriers that limited earlier synthetic media campaigns. Zuhair Lakhani, co-founder of AI advertising startup Doublespeed, told the New York Times that each post costs an estimated $1 to $3 to produce and that a single person could manage the full content workload. Earlier documented cases of synthetic media in elections typically involved isolated deepfake clips or AI-generated audio; this campaign involves a sustained, multi-platform account network producing original video content at volume without platform intervention until third-party researchers surfaced it.
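To make the economics concrete, here is a back-of-envelope calculation using the Times' $1–$3 per-post estimate. The posting cadence and campaign length below are illustrative assumptions, not figures from the investigation:

```python
# Rough campaign cost under the Times' per-post estimate of $1-$3.
# The cadence (1 post/account/day) and duration (~90 days, Jan-Apr 2026)
# are assumptions for illustration, not reported figures.
COST_PER_POST_LOW = 1.0   # USD, low end of the Times' estimate
COST_PER_POST_HIGH = 3.0  # USD, high end

accounts = 304
posts_per_account_per_day = 1  # assumed
days = 90                      # assumed, roughly January to April 2026

total_posts = accounts * posts_per_account_per_day * days
low = total_posts * COST_PER_POST_LOW
high = total_posts * COST_PER_POST_HIGH
print(f"{total_posts:,} posts -> ${low:,.0f} to ${high:,.0f}")
```

Even under these conservative assumptions, the entire multi-month network costs tens of thousands of dollars, well within the reach of a single operator.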

The pattern is not limited to the United States. AI-generated video and messaging also circulated during Japan’s recent lower house election. A survey by Professor Shinichi Yamaguchi of the International University of Japan found that 51.5 percent of respondents believed AI-generated election-related content to be factually accurate — a figure that illustrates the detection challenge synthetic media poses for general audiences without technical context.

Technical Details

Purdue University’s GRAIL research lab independently identified at least a dozen additional synthetic accounts across TikTok, Instagram, and Facebook not included in the Times’ initial count of 304. Eric Nelson, an analyst at security firm Alethea, separately found nine accounts using similar tactics on YouTube. The convergence of three independent investigations points to a multi-platform deployment rather than a TikTok-isolated phenomenon.

The New York Times’ structural analysis identified several coordination signatures: identical language patterns, shared profile pictures, and matching audio effects used across nominally separate accounts. The same AI-rendered characters appeared on multiple accounts — including a blonde woman with pigtails in a farm setting and a Black woman in a red MAGA cap and aviator goggles. A subset of accounts followed each other directly. TikTok stated its review found “zero indication of covert influence operations,” classifying the activity as engagement-driven spam rather than politically motivated coordination. The Times’ structural findings are difficult to reconcile with that characterization. No comparable left-leaning synthetic account networks were identified by any of the three research teams.

Who’s Affected

TikTok, Instagram, Facebook, and YouTube face direct moderation demands, as the accounts reached substantial audiences before detection. Under TikTok’s current reactive posture — removal after researchers surface accounts rather than proactive suppression — individual posts had already exceeded 500,000 views and some accounts had surpassed 35,000 followers before any action was taken. The accounts made unsubstantiated factual claims without disclosing that the speakers were synthetic avatars, a disclosure gap illustrated by Trump amplifying one such clip to his own audience.

Researchers at Purdue’s GRAIL lab and commercial intelligence firms such as Alethea represent the primary detection infrastructure currently monitoring for additional networks. No regulatory body or platform has announced a systematic response as of April 19, 2026.

What’s Next

The operators behind the account networks have not been identified. TikTok announced plans to remove the flagged accounts but has not provided a timeline or described what, if any, proactive measures would prevent new synthetic account networks from appearing before November. Purdue’s GRAIL lab and Alethea are continuing to monitor for additional clusters. With the midterm election cycle still months away, the three research teams’ findings represent an early-stage snapshot of a tactic whose full electoral-cycle scale remains unknown.
