- Australia’s central bank is actively monitoring Anthropic’s Mythos AI model after the company itself assessed the system as powerful enough to enable sophisticated cyberattacks, Bloomberg reported April 22, 2026.
- Anthropic’s own characterization of Mythos AI aligns with the company’s published responsible scaling policy, which mandates enhanced safeguards for models that provide meaningful “uplift” to threat actors.
- The RBA’s engagement marks a notable instance of a financial regulator formally scrutinizing a specific AI model’s offensive cybersecurity capabilities.
- No regulatory action has been announced; the RBA’s current posture is described as active monitoring.
What Happened
The Reserve Bank of Australia is monitoring Anthropic PBC’s Mythos AI model over concerns that the system is capable of facilitating sophisticated cyberattacks, Bloomberg reported on April 22, 2026. The disclosure originated with Anthropic itself: according to Bloomberg, the company stated that Mythos AI is “powerful enough to enable sophisticated cyberattacks” — an assessment that prompted the central bank’s attention.
Why It Matters
Anthropic’s publicly documented responsible scaling policy (RSP) establishes capability thresholds that require additional safety measures when a model is assessed as providing meaningful uplift to cyberattackers, bioweapon developers, or other threat actors. Mythos AI’s classification by Anthropic as cyberattack-capable suggests the model met or approached one of those thresholds. Central banks have historically focused AI oversight on financial fraud and systemic stability risk; the RBA’s attention to offensive AI capability represents a broadening of that regulatory scope.
Technical Details
Anthropic’s RSP distinguishes between models that can discuss offensive cybersecurity techniques at an informational level and those assessed as providing operational uplift — meaning the ability to materially accelerate an attacker’s capabilities beyond what freely available resources permit. The company’s framework places models in the latter category under heightened deployment restrictions, including restricted API access and enhanced monitoring. Bloomberg’s reporting indicates Anthropic placed Mythos AI in the higher-risk classification, though the available source material did not detail the red-team evaluation methodology or benchmark scores, nor whether Mythos AI has been released publicly.
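The tiered logic described above — an assessed capability level mapping to a set of deployment restrictions — can be sketched in a few lines. This is a purely illustrative model of the idea, not Anthropic’s actual implementation; all class names, levels, and restriction fields are assumptions invented for the sketch.

```python
# Illustrative sketch only: maps a hypothetical assessed capability level
# to hypothetical deployment restrictions, loosely modeled on the tiered
# thresholds described in responsible scaling policies. None of these
# names or rules come from Anthropic's actual framework.

from dataclasses import dataclass
from enum import Enum, auto


class CapabilityLevel(Enum):
    INFORMATIONAL = auto()       # can discuss techniques at an informational level
    OPERATIONAL_UPLIFT = auto()  # materially accelerates an attacker's capabilities


@dataclass(frozen=True)
class DeploymentPolicy:
    restricted_api_access: bool
    enhanced_monitoring: bool


def policy_for(level: CapabilityLevel) -> DeploymentPolicy:
    """Return the (hypothetical) restrictions for an assessed capability level."""
    if level is CapabilityLevel.OPERATIONAL_UPLIFT:
        # Higher-risk classification: heightened deployment restrictions.
        return DeploymentPolicy(restricted_api_access=True, enhanced_monitoring=True)
    # Informational-level capability: no additional restrictions in this sketch.
    return DeploymentPolicy(restricted_api_access=False, enhanced_monitoring=False)


print(policy_for(CapabilityLevel.OPERATIONAL_UPLIFT))
```

Under this toy framing, a model classified the way Bloomberg reports Mythos AI was classified would fall on the restricted branch.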
Who’s Affected
Australian financial institutions and critical infrastructure operators face the most immediate regulatory exposure, as RBA monitoring could translate into guidance restricting deployment of Anthropic’s systems in sensitive environments. Enterprise customers using Anthropic’s API — particularly those in sectors subject to Australian Prudential Regulation Authority oversight — would be subject to any access limitations or compliance requirements that follow from the RBA’s review.
What’s Next
The RBA has not disclosed specific metrics that would trigger formal regulatory action, and no timeline has been announced as of April 22, 2026. Whether the central bank will coordinate with the Australian Signals Directorate — the national cybersecurity authority — or engage Anthropic in formal dialogue has not been reported. Anthropic had not issued a public statement in response to the RBA’s monitoring activities as of the time of Bloomberg’s report.