A study by researchers at the University of Southern California, the University of Naples Federico II, and Northwestern University demonstrates that LLM-powered agents can autonomously coordinate propaganda campaigns in simulated social networks without human direction. The paper, authored by Gian Marco Orlando, Jinyi Ye, Valerio La Gatta, Mahdi Saeedi, Vincenzo Moscato, Emilio Ferrara, and Luca Luceri, is one of the first systematic investigations of emergent coordination among generative AI agents in influence operations.
The research arrives as concerns about AI-enabled disinformation intensify. A May 2025 perspective paper published in Science by Daniel Thilo Schroeder, Meeyoung Cha, and 19 co-authors including Nick Bostrom and Gary Marcus warned that malicious AI swarms could fabricate grassroots consensus, micro-suppress voters, and contaminate AI training data. Separately, a 2025 study in PNAS Nexus documented a state-affiliated propaganda site with ties to Russia that adopted generative AI to scale up its disinformation output.
The team built a simulated social media environment modeled on X (formerly Twitter), populated with 50 agents powered by Llama 3.3 70B: 10 designated as influence operation (IO) agents and 40 as organic users, the latter split evenly between 20 aligned with IO messaging and 20 not aligned. Each agent was given a distinct persona and allowed to write posts and interact freely. The researchers tested three operational regimes: Common Goal (agents share an objective but don’t know their teammates), Teammate Awareness (agents know who their teammates are), and Collective Decision-Making (deliberation and voting). They measured coordination through network density, clustering coefficients, reciprocity of interactions, narrative homogeneity, and amplification synchronization, and tracked impact via engagement metrics from organic agents and hashtag diffusion rates. Notably, over 80% of aligned organic agents adopted the campaign hashtag after exposure to only 10 IO-generated posts.
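The coordination measures named here are standard network statistics. As a rough sketch of how they can be computed (this is not the authors' released code, and the interaction data below is invented), the IO agents' interactions can be treated as a directed graph:

```python
# Illustrative sketch: computing the coordination metrics named above
# (network density, clustering, reciprocity) on a toy directed graph of
# agent interactions. An edge (A, B) means "agent A replied to or
# amplified agent B". All agent names and edges are invented.
import networkx as nx

interactions = [
    ("io_1", "io_2"), ("io_2", "io_1"),   # a reciprocal pair
    ("io_1", "io_3"), ("io_3", "io_1"),
    ("io_2", "io_3"), ("io_4", "io_1"),
    ("io_5", "io_2"),
]

G = nx.DiGraph(interactions)

density = nx.density(G)                # fraction of possible edges present
clustering = nx.average_clustering(G)  # how tightly neighbors interlink
reciprocity = nx.reciprocity(G)        # share of edges that are mutual

print(f"density={density:.2f} clustering={clustering:.2f} reciprocity={reciprocity:.2f}")
```

Under the more structured regimes, these are the quantities (density, clustering, and reciprocity over the IO subnetwork) that the study reports rising.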
The results showed that as operational structures became more defined, IO agent networks grew denser, interactions became more reciprocal, narratives converged, and hashtag adoption accelerated. A key finding: the Teammate Awareness condition, which simply revealed to IO agents which other agents shared their goals, produced coordination nearly as strong as explicit Collective Decision-Making with deliberation and voting. In the Collective Decision-Making condition, IO agents autonomously converged on five core strategies: amplifying high-performing content, maintaining unified messaging, engaging strategically with receptive audiences, coordinating peer promotion, and ensuring consistent language markers. The generated content varied enough to appear authentic, making the coordinated behavior difficult to detect through surface-level analysis. As the authors state: “This work presents the first systematic study of emergent coordination among generative agents in simulated IO campaigns.”
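The article does not spell out how narrative homogeneity was scored; a common, simple proxy is the mean pairwise similarity of post texts. The sketch below uses TF-IDF vectors from scikit-learn on invented example posts (including a made-up #AtlasNow campaign hashtag) and should be read as an illustration of the measurement idea, not the paper's method:

```python
# Illustrative proxy for "narrative homogeneity": mean pairwise cosine
# similarity of agent posts. Posts and hashtag are invented; the paper's
# actual measure may differ.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "Project Atlas will bring jobs back to our region #AtlasNow",
    "Real economic renewal starts with Project Atlas #AtlasNow",
    "Why is nobody talking about the jobs Project Atlas creates? #AtlasNow",
]

tfidf = TfidfVectorizer().fit_transform(posts)
sims = cosine_similarity(tfidf)

# Average over distinct pairs (upper triangle, excluding the diagonal).
pairs = list(combinations(range(len(posts)), 2))
homogeneity = sum(sims[i, j] for i, j in pairs) / len(pairs)
print(f"mean pairwise similarity: {homogeneity:.2f}")
```

The combination the study describes, high underlying similarity wrapped in enough surface variation to read as authentic, is exactly what keyword or duplicate matching on raw text misses.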
The findings carry direct implications for platform trust and safety teams at Meta, X, and Google, which currently rely on behavioral detection methods that may not flag this type of latent, AI-generated coordination. As the authors note: “systems that merely enable awareness of team composition among aligned actors can unlock much of the coordination power typically attributed to more elaborate command-and-control structures.” The authors acknowledge limitations: the 50-agent simulation was run only three times per condition, network saturation effects may emerge at this scale, and using a single LLM introduces model-specific biases. The team plans to replicate the experiments with alternative models and has publicly released both its code and an interactive dashboard for reproducibility. Related work from Lukasz Olejnik (arXiv, August 2025) has separately shown that even small language models on commodity hardware can produce coherent, persona-driven political messaging.
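The behavioral detection methods referenced above typically hunt for lockstep activity, such as account pairs repeatedly amplifying the same items within narrow time windows. The toy detector below, with invented events and thresholds and no resemblance to any platform's production system, illustrates the class of signal involved:

```python
# Toy behavioral detector: flag account pairs that amplify the same item
# within a short time window suspiciously often. All data and thresholds
# are invented for illustration; real platform systems are far more elaborate.
from collections import defaultdict
from itertools import combinations

WINDOW_SECONDS = 60   # max gap between two amplifications of the same item
MIN_CO_EVENTS = 2     # pairs flagged at or above this co-occurrence count

# (account, item_id, unix_timestamp) amplification events (invented)
events = [
    ("io_1", "post_42", 1000), ("io_2", "post_42", 1030),
    ("io_1", "post_77", 2000), ("io_2", "post_77", 2010),
    ("user_9", "post_42", 9000),  # far outside any window
]

by_item = defaultdict(list)
for account, item, ts in events:
    by_item[item].append((account, ts))

co_counts = defaultdict(int)
for item, acts in by_item.items():
    for (a1, t1), (a2, t2) in combinations(acts, 2):
        if a1 != a2 and abs(t1 - t2) <= WINDOW_SECONDS:
            co_counts[tuple(sorted((a1, a2)))] += 1

flagged = {pair for pair, n in co_counts.items() if n >= MIN_CO_EVENTS}
print(flagged)  # {('io_1', 'io_2')}
```

A campaign whose agents vary their wording and only loosely synchronize their timing gives both this kind of window-based rule and surface-level text matching little to latch onto, which is the detection gap the authors warn about.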
Related Reading
- Stanford Study Finds AI Chatbots Reinforce Delusions, Fail to Prevent Self-Harm
- Self-Organizing LLM Agents Outperform Designed Structures by 14%, Study Finds
- AI Agents Cover Up Fraud and Violent Crime to Serve Corporate Interests, Study Finds
- Hubcap: A 25-Line PHP Script That Exposes the Minimal Architecture of Autonomous AI Agents
- Cursor’s Composer 2 Uses Moonshot AI’s Kimi K2.5, API Traffic Reveals