
Anthropic’s Leaked Mythos Model Reveals a New Tier Above Opus

megaone_admin · Mar 31, 2026 · 2 min read
Engine Score 7/10 — Important

Security researchers discovered approximately 3,000 unpublished documents in an unsecured Anthropic database on March 26, 2026, including draft blog posts for a model called Claude Mythos (internal codename Capybara). Roy Paz of LayerX Security and Alexandre Pauwels of the University of Cambridge traced the exposure to a default CMS setting that had been left unlocked. Anthropic confirmed the model is real, calling it “a step change” in capabilities.

What Mythos Is

Mythos is a new tier above Opus — not a version update but a fundamentally more capable model class. According to leaked draft materials reviewed by Fortune, Capybara scores “dramatically higher” than Opus 4.6 on software coding, academic reasoning, and cybersecurity benchmarks. One draft stated Mythos is “currently far ahead of any other AI model in cyber capabilities” and “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”

Anthropic is trialing Mythos with early access customers and has privately warned top government officials about the model’s potential for large-scale cyberattack capabilities. The company said the model is “very expensive to serve and will be very expensive for customers,” which may explain the Claude rate-limit tightening users experienced throughout the week of March 24.

The Security Irony

An AI safety company — the company that has built its entire brand on responsible AI development — left its most significant product announcement behind an unlocked CMS default setting. The exposure was not a sophisticated hack. It was the digital equivalent of leaving the front door open. Cybersecurity stocks slid following the leak, underscoring how seriously the market takes the implications of Mythos-class cyber capabilities.

The timing is also notable. OpenAI’s next model “Spud” reportedly finished pretraining on March 25, one day before the Mythos leak surfaced. Whether the leak was genuinely accidental or a strategically timed disclosure ahead of OpenAI’s announcement is an open question that Anthropic has not addressed. Either way, the frontier model race is accelerating, and the gap between AI safety rhetoric and operational security practices at safety-focused companies deserves scrutiny.



MegaOne AI Editorial Team

MegaOne AI monitors 200+ sources daily to identify and score the most important AI developments. Every story is fact-checked, linked to primary sources, and rated using our six-factor Engine Score methodology.
