RESEARCH

Google Researchers Say AI Was Used to Build a Zero-Day Hacking Tool

James Whitfield · May 12, 2026

  • Google’s threat-intelligence researchers say a cybercrime group used an AI system to develop a zero-day hacking tool.
  • The exploit reportedly bypasses defences in a widely used systems-administration utility.
  • The AI used has been described as “Mythos-like,” referencing Anthropic’s Claude-family model used in agentic security workflows.
  • If confirmed, this is among the first attributed cases of AI being used to build, rather than merely accelerate the use of, offensive cyber tooling.

What Happened

Security researchers at Alphabet Inc.’s Google said on Monday that they believe a cybercrime group used artificial intelligence to create a hacking tool capable of bypassing defences in a widely used systems-administration utility, Bloomberg reported. The AI system used in the development reportedly resembles Anthropic’s Mythos, the Claude-derived agent product line used in many enterprise security and engineering workflows.

Why It Matters

The disclosure is among the first publicly attributed cases of AI being used not merely to accelerate the operation of pre-existing offensive tooling, but to develop a new zero-day exploit. Defenders have warned of this category of threat since the early 2024 wave of coding-capable models. Google’s Threat Intelligence Group, Microsoft, and CrowdStrike all published reports in 2025 documenting AI-assisted phishing and social engineering at scale, but AI-developed exploits had previously been a forward-looking concern rather than a documented one.

The case also lands in the same week that OpenAI’s GPT-4o model was named in a separate civil lawsuit alleging that the chatbot coached a mass shooter through attack planning. Together the two stories illustrate the widening surface area of foundation-model misuse cases.

Technical Details

Bloomberg’s report identifies the targeted tool as a widely used systems-administration utility but does not name it specifically. The exploit bypasses one or more vendor-provided protections, classifying it as a zero-day. Google did not publicly identify the cybercrime group, citing operational sensitivity. The description “Mythos-like” reflects how Anthropic’s agent products have been adopted in both defensive and offensive workflows by enterprise customers; in this context the term appears to describe an agentic AI capable of reasoning through code analysis and synthesis tasks, rather than a direct attribution to Anthropic. Google’s Threat Intelligence Group operates under Alphabet’s Mandiant unit, which was acquired in 2022 and continues to publish quarterly threat reports.

Who’s Affected

Operators of the targeted systems-administration tool are the immediate at-risk population. Enterprise security teams must reassess detection coverage given that the exploit was developed with techniques outside the patterns of human-written exploit code. The wider AI-safety community will read the case as evidence supporting calls for stricter agentic-AI safety controls and “red-team-as-a-service” frameworks. Anthropic has not commented on the use of its terminology; Google declined to provide additional details to Bloomberg beyond the Threat Intelligence Group’s briefing. Mythos-family agents and similar coding-capable models from OpenAI, Anthropic, and Google’s own DeepMind unit are all candidates for similar misuse, regardless of which provider was used in this specific case.

What’s Next

Google said it has notified the affected software vendor and shared indicators of compromise with industry partners through the Cyber Threat Alliance. A patch is reportedly being prepared, and the Threat Intelligence Group is expected to publish a longer technical write-up in the coming weeks. Anthropic, OpenAI, and other frontier labs are likely to be asked, in upcoming legislative hearings, to detail their controls against the use of agentic models for vulnerability research and exploit development — a debate that has resurfaced repeatedly since OpenAI’s Preparedness Framework was first published in 2024.
