
Sam Altman Says He Tried to Save Anthropic — Then Accused Its CEO of Undermining Him for Years

MegaOne AI · Apr 2, 2026 · 5 min read
Engine Score 7/10 — Important
  • Internal Slack messages reviewed by Axios show Sam Altman told OpenAI staff he was trying to “save” Anthropic as Anthropic’s Pentagon negotiations collapsed in late February 2026.
  • At the same time, Altman privately complained that Anthropic CEO Dario Amodei had spent years trying to undermine him — while OpenAI moved to capture the military contract Anthropic had just lost.
  • The Pentagon designated Anthropic a “supply chain risk” — the first time the U.S. government has applied that label to an American company — after talks broke down over Anthropic’s refusal to allow autonomous weapons use and mass domestic surveillance.
  • Anthropic has sued the Trump administration to overturn the designation, and a federal judge has since blocked it, calling the move “classic” First Amendment retaliation.

What Happened

On March 26, 2026, Axios reported on a series of internal OpenAI Slack messages sent by CEO Sam Altman between February 24 and March 2, as the Pentagon’s dispute with Anthropic escalated into a public standoff. The messages show Altman positioning himself as a mediator, telling staff he was working to “save” Anthropic from a damaging federal blacklisting — even as OpenAI was simultaneously pursuing the same defense contract Anthropic was losing.

On February 26, Altman sent an all-staff note saying OpenAI shared Anthropic’s stated red lines and wanted to help de-escalate. He acknowledged the optics “may not look good in the short term” but asked staff to understand the nuance. As the Pentagon’s deadline approached on February 27, Altman relayed to a core group that the Defense Department believed it could offer Anthropic an off-ramp from the supply chain risk designation — but that Claude was already so embedded across intelligence agencies that the specific carve-out OpenAI had secured could not be extended to Anthropic. Altman told staff he hadn’t thought of that complication in advance.

In the same messages, Altman privately vented that he found it “strange” to be working so hard to save a rival whose CEO had, in his view, spent years trying to destroy OpenAI. Amodei sharply disputed that account. In a leaked internal memo obtained before Altman’s messages surfaced, Amodei called OpenAI’s framing “mendacious” and described several of Altman’s public statements as lies. He later apologized for the memo’s tone.

Why It Matters

The leaked messages expose a dynamic that has long shaped the AI industry but rarely surfaced publicly: the personal dimension of the OpenAI-Anthropic rivalry. Altman’s framing — that he was acting as a peacemaker — sits in direct tension with the outcome, in which OpenAI moved quickly to secure the Pentagon contract hours after Anthropic was designated a supply chain risk.

The dispute also raises questions about how personal grievances between founders shape institutional decisions affecting the entire AI sector. Both companies are developing frontier AI systems, setting safety policies, and lobbying regulators. When their CEOs are simultaneously rivals, former colleagues, and mutual accusers, those decisions carry baggage that goes beyond technical or commercial logic.

The Pentagon episode has also sharpened the competitive picture. Anthropic now captures roughly 40% of enterprise LLM spending, up from 24% a year earlier, while OpenAI’s share has fallen to around 27%. Whether the supply chain risk designation, even if overturned, will deter federal agencies from relying on Claude remains an open question.

Technical Details

The core technical dispute between Anthropic and the Pentagon centered on two specific use restrictions. Anthropic required that Claude not be used for mass surveillance of U.S. citizens and that it not be used for autonomous weapons systems — defined as lethal systems that can select and engage targets without meaningful human control. The Pentagon responded that it could not allow a private company to restrict how it uses AI tools in a national security emergency, and it demanded access for “all lawful purposes.”

OpenAI ultimately agreed to the same two restrictions in its Pentagon contract, a point Sam Altman highlighted publicly after the deal was announced. A subsequent Axios report said OpenAI’s contract also included additional surveillance protections negotiated after the Anthropic fallout became public.

Separately, CNBC reported that Claude had already been used across U.S. intelligence operations, including target identification and intelligence assessments in the ongoing U.S.-Iran conflict — functions that predate and sit uneasily alongside Anthropic’s stated red lines. The supply chain risk designation, per the Pentagon’s own clarifications, applied narrowly to direct DoD contracts and could not block existing or non-DoD uses of Claude.

Who’s Affected

Anthropic lost a $200 million Pentagon contract as a direct result of the breakdown. The company then filed two separate lawsuits against the Defense Department and the Trump administration — one in the Northern District of California and one in the D.C. Circuit Court of Appeals, targeting different statutory authorities the government cited. On March 24, a federal judge pressed the government on the low threshold used to apply the designation. The Hill subsequently reported that the court blocked the designation, with U.S. District Judge Rita Lin describing the Trump administration’s action as First Amendment retaliation against Anthropic’s public advocacy positions.

OpenAI gained a high-value defense contract and positioned itself as the primary AI vendor to U.S. military and intelligence agencies. But the deal has not been without scrutiny. The Intercept reported that OpenAI’s surveillance commitments in the contract rely heavily on internal trust, with limited external verification mechanisms.

The broader AI startup ecosystem has also been affected. TechCrunch noted that the first-ever supply chain risk designation against an American company could chill other AI startups from pursuing Defense Department work if they believe safety-related contract negotiations could result in federal blacklisting.

What’s Next

Anthropic’s legal challenge to the supply chain risk designation is advancing through federal court, with the injunction currently blocking the label from taking effect. The Trump administration has a window to seek emergency relief from the appeals court. The outcome will determine whether AI companies can negotiate safety-based use restrictions in government contracts without risking retaliatory designations.

On the commercial side, Epoch AI has projected that Anthropic’s annualized revenue growth could allow it to surpass OpenAI by mid-2026, driven largely by enterprise adoption. Whether the Pentagon dispute accelerates or disrupts that trajectory will depend partly on how government agencies interpret the ongoing litigation and whether other federal departments follow the DoD’s lead in distancing themselves from Claude.

Axios has also reported separately that there are active discussions about how Anthropic’s Pentagon deal could be revived. Any such revival would likely require either a change in the administration’s posture toward Anthropic’s use restrictions, or a modification of Anthropic’s own contract terms — a move the company has so far resisted publicly. Watch for court rulings in April 2026 and any further internal communications from either company that may surface as the litigation proceeds.


MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
