- OpenAI is reportedly developing a new cybersecurity AI model restricted to a select group of companies, according to an Axios report relayed by The Decoder on April 9, 2026.
- Anthropic restricted its Mythos Preview model on April 8, 2026, to select tech and security firms because of its demonstrated offensive hacking capabilities.
- OpenAI’s “Trusted Access for Cyber” pilot, launched in February 2026 alongside GPT-5.3-Codex, provides approved participants with $10 million in API credits for defensive security work.
- Anthropic has ruled out a public release of Mythos Preview until adequate safety guardrails are in place, with no timeline specified.
What Happened
OpenAI is reportedly developing a new AI model with advanced cybersecurity capabilities that will be restricted to a select group of companies, according to a report published April 9, 2026 by The Decoder, citing Axios. The move follows Anthropic’s April 8, 2026 announcement that access to its Mythos Preview model would be limited to a select set of tech and security firms, citing the model’s powerful offensive hacking capabilities. OpenAI has not publicly confirmed the new model’s existence, and no access terms or eligibility criteria have been disclosed.
The reported development extends a restricted-access approach OpenAI began in February 2026 with its “Trusted Access for Cyber” pilot program, launched alongside GPT-5.3-Codex, which the company described as its most capable cybersecurity model at the time of release. The two companies now appear to be converging on controlled deployment as the default posture for frontier cybersecurity AI.
Why It Matters
The parallel restrictions from OpenAI and Anthropic reflect a shared assessment among leading AI labs that frontier cybersecurity models — those capable of automating offensive hacking operations — require different deployment frameworks than general-purpose AI systems. OpenAI had signaled this position in February 2026, when it restricted its most advanced cybersecurity capabilities to a vetted group of organizations focused on defensive security work, rather than making GPT-5.3-Codex fully available through standard API access.
Anthropic’s Mythos Preview was restricted immediately upon any external availability, suggesting the capability threshold triggering access controls is rising alongside model performance rather than stabilizing at a fixed benchmark. Anthropic characterized Mythos Preview as belonging to a model class that presents distinct risks if broadly deployed.
Technical Details
OpenAI’s “Trusted Access for Cyber” pilot program, launched in February 2026, grants approved participants access to powerful cybersecurity models for defensive security work, backed by $10 million in API credits under the program’s disclosed terms. GPT-5.3-Codex, the model central to the program at launch, was described by OpenAI as its most capable cybersecurity-focused model at the time of its release.
Anthropic restricted Mythos Preview specifically because of its demonstrated offensive hacking capabilities — a distinction the company drew explicitly when announcing the access controls. Anthropic stated that “models in the Mythos class won’t ship until adequate safety guardrails are in place,” according to The Decoder’s reporting, leaving any public release on an indeterminate timeline. OpenAI has not indicated whether its forthcoming model will carry a similar indefinite hold on public access or whether broader availability is planned for a later stage.
Who’s Affected
Security companies and enterprise technology firms seeking access to frontier cybersecurity AI capabilities are the primary groups affected by these restrictions. Smaller organizations, independent security researchers, and individual developers are excluded from both Anthropic’s Mythos Preview and OpenAI’s reported new model under current access frameworks, with no public pathway for access established by either company.
Organizations already enrolled in OpenAI’s “Trusted Access for Cyber” pilot retain access to GPT-5.3-Codex and related capabilities under existing program terms. The broader security research community — including academic researchers and independent red teams — remains outside the access perimeter of both companies’ most capable cybersecurity models.
What’s Next
Anthropic has ruled out a public release of Mythos Preview without specifying when it will consider safety guardrails sufficient for broader deployment. OpenAI has not publicly confirmed the new model described in the Axios report, leaving the reported program’s structure, eligibility criteria, and access timeline undefined.
The alignment between two major AI labs on restricted deployment for high-capability cybersecurity AI may signal a broader industry shift in how such models are released, though neither company has published criteria for determining which capabilities trigger access controls or how those criteria might evolve.