- Google signed a contract granting the U.S. Department of Defense access to its AI models for classified tasks on the same day more than 600 employees sent an open letter opposing the deal.
- Legal experts say the contract’s safety provisions against domestic mass surveillance and autonomous weapons carry no legal weight under the agreement’s explicit terms.
- Unlike OpenAI, which retained full control over its safety stack in a February 2026 Pentagon deal, Google agreed to adjust its AI safety filters at the government’s request.
- Google quietly dropped its 2018 self-imposed restrictions on using AI for weapons or surveillance last year, reversing a pledge made after the original Project Maven controversy.
What Happened
Google signed a contract granting the U.S. Department of Defense access to its AI models for classified tasks, according to The Decoder, citing original reporting from The Information. The deal authorizes Pentagon use of Google AI for “any lawful government purpose.” On the same day the contract was signed, more than 600 Google employees — many from the company’s DeepMind AI research lab — sent an open letter to CEO Sundar Pichai urging him to reject all classified collaboration with the Defense Department.
“We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways,” the employees wrote, according to the Washington Post. The letter argued that classified contracts make it structurally impossible for Google’s own representatives to monitor how the technology is being deployed: “The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads.”
Why It Matters
The contract marks a significant reversal of Google’s publicly stated position from 2018, when thousands of employees protested the company’s Project Maven contract — a military drone targeting program — and Google subsequently pledged never to use AI for weapons or surveillance. The company quietly dropped those self-imposed restrictions last year.
In February 2026, Anthropic was excluded from a comparable classified Pentagon deal after demanding contractual guarantees against mass surveillance and autonomous weapons; Anthropic is currently suing over that decision. Earlier in 2026, more than 900 Google employees had publicly called on their own company to support Anthropic’s red lines on those same issues.
Technical Details
The contract includes language stating the AI system “is not intended for domestic mass surveillance or autonomous weapons without appropriate human oversight.” However, the same document explicitly states that the agreement “does not confer any right to control or veto lawful Government operational decision-making,” according to The Information.
Charlie Bullock, a lawyer and Senior Research Fellow at the Institute for Law and AI, concluded that the phrasing “is not intended for, and should not be used for” carries no legal weight — it signals that such use would be unwelcome, but would not constitute a breach of contract. Amos Toh, from NYU’s Brennan Center, added that “appropriate human oversight” does not necessarily require a human intermediary between target identification and a fire order, and noted that the Pentagon has not ruled out fully autonomous weapons systems.
Google’s terms give the government more operational latitude than OpenAI’s February 2026 Pentagon agreement, in which OpenAI retained full control over its “Safety Stack.” Google, by contrast, committed to helping the government adjust its AI safety filters upon request, The Information reported. Elon Musk’s xAI also holds a classified AI contract with the Pentagon.
Who’s Affected
More than 600 Google employees — fewer than the 900-plus who earlier in 2026 publicly urged the company to back Anthropic’s red lines — have gone on record opposing the contract. A Google Public Sector spokesperson framed the new deal as an extension of an existing November agreement, reiterating the company’s stated opposition to AI use for “domestic mass surveillance or autonomous weaponry without appropriate human oversight” — language legal experts have assessed as unenforceable.
Anthropic is also entangled in the broader landscape: Project Maven, the program whose controversy prompted Google’s 2018 pledge, now operates under Palantir and has reportedly used Anthropic’s Claude model for target selection in the Iran conflict. Anthropic’s ongoing lawsuit challenging its Pentagon exclusion could shape the contractual terms available to all AI companies in future defense deals.
What’s Next
Google employees who signed the open letter have not publicly announced further action. The company’s spokesperson reiterated principles that legal experts have already assessed as contractually unenforceable under the signed agreement’s explicit terms.
Anthropic’s lawsuit against the Pentagon over its February 2026 exclusion is ongoing, and a ruling could determine whether AI companies can legally require safety guarantees in classified defense contracts — a question the Google deal leaves unresolved.