The bombing of the Shajareh Tayyebeh primary school in Minab, Iran, on February 28, 2026, which killed between 175 and 180 people, mostly girls aged seven to twelve, was caused by an outdated Defense Intelligence Agency database entry, not by an AI chatbot selecting the target. A detailed investigation reveals that the building had been classified as a military facility in a database that was never updated to reflect its conversion into a school, a change that satellite imagery shows occurred by 2016 at the latest.
Public discourse focused almost entirely on whether Claude, Anthropic’s chatbot, had selected the school as a target. Members of Congress wrote to Defense Secretary Pete Hegseth about the use of AI in the strikes, and media coverage centered on LLM reliability in military contexts. This framing obscured the actual targeting infrastructure: Palantir’s Maven Smart System, which fuses satellite imagery, signals intelligence, and sensor data to carry a target from detection through strike authorization.
Maven’s history traces back to Project Maven, established in April 2017 as the Algorithmic Warfare Cross-Functional Team. Google originally held the contract but walked away from it in 2018, after more than 4,000 employees signed a letter opposing the company’s involvement in Pentagon targeting work. Palantir took over and spent six years building Maven into the military’s primary targeting pipeline. By the start of the Iran operation, Maven had become embedded infrastructure: invisible enough that public attention fixated on a chatbot that had nothing to do with the targeting chain.
The distinction matters because the two technologies pose fundamentally different risks. An LLM hallucinating a target is a speculative concern that invites debates about alignment and model behavior. A targeting system operating on a stale database is a bureaucratic failure that was entirely preventable through routine data maintenance. The speed Maven brings to the kill chain, compressing the sequence from detection to strike, made the database error lethal in a way it might not have been under slower manual processes.
The incident illustrates what researcher Morgan Ames calls a “charismatic technology” effect: LLMs have become so dominant in public discourse about AI that they absorb attention and attribution even in contexts where they played no role. The actual infrastructure that enabled the strike — mature, well-funded, politically connected data analytics platforms — operated below the threshold of public scrutiny while a chatbot took the blame.
