Vice President JD Vance and Treasury Secretary Scott Bessent convened a closed-door AI security briefing on April 7, 2026, summoning OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and executives from Google DeepMind, xAI, and Meta to answer pointed questions about model security and cyber resilience — exactly seven days before Anthropic’s scheduled release of Mythos, the company’s highest-risk model to date.
The central demand from both officials: a credible answer to what happens when a frontier AI model becomes an attack vector against critical U.S. infrastructure, and who is legally and operationally responsible for the response.
What Vance and Bessent Actually Asked
The questioning ran approximately three hours. Vance focused on three specific threat categories: model exfiltration (whether adversaries could steal and redeploy model weights offensively), jailbreak persistence (whether safety guardrails could be systematically bypassed at scale), and incident response timelines (how quickly AI companies could actually contain a breach after detection).
Bessent’s questions reflected Treasury’s specific jurisdiction over financial system stability. He pressed executives on AI-enabled fraud scenarios — specifically, the speed at which adversarial models could generate synthetic identity documents, manipulate trading systems, or coordinate disinformation campaigns before existing detection systems flagged the activity. Treasury has internally modeled scenarios where an adversarial AI system executes $40 billion in fraudulent transactions before legacy monitoring triggers an alert.
The session was notable for what it wasn’t: a performative hearing staged for cameras. Both Vance and Bessent arrived pre-briefed by NSA analysts and came with specific technical questions, according to people familiar with the proceedings. No cameras. No prepared opening statements.
The Mythos Problem: Why This Briefing Happened Now
Anthropic’s Mythos model has been the subject of internal safety escalation since at least Q3 2025. The model scores substantially higher than Claude 3.7 Sonnet on MAST (Model Autonomous Safety Testing) benchmarks — specifically in agentic planning, strategic deception resistance, and long-horizon task execution. Those same capabilities place it, under Anthropic’s own Responsible Scaling Policy, at Capability Level 4, making it the first model the company has ever classified at that threshold.
CL4 in Anthropic’s framework means the model has demonstrated the ability to meaningfully assist in creating weapons capable of mass casualties: not that it will, but that it demonstrably can under adversarial prompting. That classification triggers mandatory additional review requirements under Anthropic’s own policy. Whether those requirements have been fully satisfied before the April 14 release date is precisely the question Vance’s team is pressing.
Anthropic’s security track record adds context. The company accidentally exposed source code for a Claude AI agent framework earlier this year — a breach that security researchers noted could assist adversarial fine-tuning of existing models. Mythos represents a substantially larger attack surface than anything Anthropic has previously deployed.
What the CEOs Said — and Didn’t
Altman’s position was the most specific. OpenAI has invested over $1.2 billion in safety and security infrastructure since 2024, maintains a 300-person dedicated security team, and operates under an existing MOU with CISA for coordinated incident response. He confirmed GPT-5 family models undergo mandatory red-team exercises with NSA contractors before deployment — a process that required 14 weeks for the most recent major release.
Amodei was more guarded. Anthropic’s constitutional AI framework and model cards remain the most detailed in the industry on safety methodology, but the company declined to confirm specific security architecture details in the open session. On the direct question of whether Mythos would ship on its announced timeline, Amodei made no public commitment.
Neither CEO addressed the supply chain problem substantively: once model weights are trained, they can be copied. A breach at any point in the deployment pipeline — cloud infrastructure provider, API layer, or fine-tuning service — creates an uncontrolled copy with no recall mechanism. And the exposure does not stop at U.S. borders: AI infrastructure operating near adversarial territory is not a hypothetical risk but a documented operational reality, one already reshaping how NATO allies think about compute sovereignty.
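The recall gap is mechanical, not rhetorical. A deployment pipeline can verify that the weights it is about to load are the weights that were shipped, but nothing in that check can revoke a copy that has already left the boundary. A minimal sketch of such an integrity check (file names and manifest format are hypothetical, not any lab's actual pipeline):

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB weight shards fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(weights_dir: Path, manifest_path: Path) -> list[str]:
    """Compare every shard against the expected digests in a signed manifest.

    Returns the shards that fail verification. An empty list means the copy
    about to be loaded matches what was shipped; it says nothing about how
    many other copies exist elsewhere.
    """
    # Hypothetical manifest format: {"shard-00001.bin": "ab12...", ...}
    manifest = json.loads(manifest_path.read_text())
    failures = []
    for name, expected in manifest.items():
        shard = weights_dir / name
        if not shard.exists() or sha256_of(shard) != expected:
            failures.append(name)
    return failures

if __name__ == "__main__":
    bad = verify_weights(Path("weights/"), Path("weights.manifest.json"))
    if bad:
        raise SystemExit(f"refusing to load; shards failed verification: {bad}")
```

The check is useful against tampering at load time, which is exactly why it does not answer the question Vance raised: detection and revocation are different problems.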
The Documented Vulnerability Backdrop
The April 7 briefing didn’t happen without precedent. In 2025, researchers at Carnegie Mellon and the University of Edinburgh demonstrated that adversarially fine-tuned versions of Claude 3.5 Sonnet could be induced to provide detailed synthesis routes for chemical precursors with an 87% success rate — compared to 4% for the base model. OpenAI’s internal red team documented comparable results for GPT-4 variants under systematic prompt manipulation.
CISA’s 2025 AI Threat Landscape report identified 14 distinct attack vectors against deployed AI systems, including model inversion, membership inference, and adversarial prompt injection at the API layer. The agency assessed that 9 of those 14 had already been exploited in confirmed nation-state operations against U.S. companies.
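Several of those vectors are simpler than they sound. Membership inference, for instance, often needs nothing more than per-example loss: samples a model was trained on tend to score lower loss than samples it was not, so a single fitted threshold can leak training-set membership. A toy sketch of the classic loss-threshold variant (synthetic data, illustrative only; the technique follows Yeom et al., 2018):

```python
import numpy as np

def loss_threshold_attack(member_losses, nonmember_losses):
    """Loss-threshold membership inference: predict 'member' when a
    sample's loss falls at or below a threshold fitted on known
    member/non-member losses."""
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))]).astype(bool)
    best_acc, best_t = 0.0, None
    for t in np.unique(losses):  # sweep every candidate threshold
        acc = float(((losses <= t) == labels).mean())
        if acc > best_acc:
            best_acc, best_t = acc, t
    return best_t, best_acc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in: training members overfit to lower loss than non-members.
    members = rng.normal(0.4, 0.2, 1000).clip(min=0)
    nonmembers = rng.normal(1.0, 0.3, 1000).clip(min=0)
    t, acc = loss_threshold_attack(members, nonmembers)
    print(f"threshold={t:.3f}  attack accuracy={acc:.1%}")
```

Against a production API the attacker sees confidence scores rather than raw losses, but the structure of the attack is the same, which is why it appears in CISA's vector list at all.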
The Humans First movement, which has lobbied Congress directly for capability moratoriums, submitted a 47-page technical brief to Vance’s office before the April 7 meeting. The brief argued that CL4 systems should face the same pre-deployment review requirements as dual-use export-controlled technologies under the Export Administration Regulations — a standard that would functionally require federal sign-off before any CL4 model ships.
The Commercial Pressure Anthropic Cannot Ignore
April 7 was the second time in 2026 that executive branch officials directly questioned Amodei about Mythos timelines. NSA Director Gen. Timothy Haugh met with Anthropic’s safety team in February, specifically about the model’s agentic task execution capabilities and its performance on autonomous cyberattack planning benchmarks.
The regulatory pressure carries a competitive dimension that benefits Altman directly. OpenAI’s expanding commercial portfolio — including its $1 billion media partnership with Disney — gives the company revenue diversification and political capital that Anthropic, at a $61.5 billion private valuation, does not have. Every week Mythos is delayed is a week GPT-5 consolidates enterprise contracts; in the enterprise segment, OpenAI currently holds an estimated 44% of active API spend, per MegaOne AI’s market tracking.
That tension — between the safety obligations Anthropic’s own Responsible Scaling Policy creates and the commercial imperative to ship — is exactly the pressure point Vance and Bessent are applying. Regulatory uncertainty, when applied selectively to a competitor’s release timeline, functions as competitive interference by other means.
What Happens If Mythos Ships on Schedule
Anthropic has indicated it will make its final release decision ahead of the scheduled April 14 launch. If Mythos ships as announced, deployment will use a staged access protocol: initial rollout restricted to verified enterprise API customers only, with consumer-facing access gated behind an additional 60-90 day review window.
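Mechanically, a gate of that shape is simple to enforce; the hard questions are policy, not code. A minimal sketch, with hypothetical tier names and the 90-day upper bound of the reported review window:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical tiers mirroring the reported rollout: verified enterprise
# first, consumer access held behind an additional review window.
RELEASE_DATE = date(2026, 4, 14)
CONSUMER_REVIEW_WINDOW = timedelta(days=90)  # upper end of the 60-90 day gate

@dataclass
class ApiKey:
    key_id: str
    tier: str           # "enterprise_verified" | "consumer"
    kyc_complete: bool  # identity verification for enterprise accounts

def access_allowed(key: ApiKey, today: date) -> bool:
    """Return True if this key may call the model under the staged protocol."""
    if today < RELEASE_DATE:
        return False
    if key.tier == "enterprise_verified":
        return key.kyc_complete
    if key.tier == "consumer":
        return today >= RELEASE_DATE + CONSUMER_REVIEW_WINDOW
    return False  # unknown tiers fail closed

assert access_allowed(ApiKey("k1", "enterprise_verified", True), date(2026, 4, 14))
assert not access_allowed(ApiKey("k2", "consumer", True), date(2026, 5, 1))
assert access_allowed(ApiKey("k2", "consumer", True), date(2026, 7, 14))
```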
That structure doesn’t satisfy the White House. The concern isn’t consumer access — it’s that any API access creates a systematic adversarial probing surface. Ten thousand enterprise API users stress-testing a CL4 model is functionally equivalent to an adversarial red-team exercise with unlimited compute and no oversight. The model learns nothing from this. Attackers mapping its capability boundaries do.
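What a provider can do is watch for traffic that looks like boundary mapping rather than ordinary use. One hedged sketch, assuming an upstream classifier already labels refusals by sensitive category (all names and thresholds illustrative):

```python
from collections import defaultdict

def flag_probing_accounts(events, min_requests=50,
                          refusal_rate_threshold=0.30,
                          distinct_category_threshold=8):
    """Flag API accounts whose traffic resembles capability-boundary mapping:
    an unusually high share of safety refusals spread across many unrelated
    sensitive categories. Thresholds are illustrative, not tuned.

    `events` is an iterable of (account_id, refused: bool, category: str)
    tuples, e.g. categories assigned by an upstream content classifier.
    """
    stats = defaultdict(lambda: {"total": 0, "refused": 0, "categories": set()})
    for account_id, refused, category in events:
        s = stats[account_id]
        s["total"] += 1
        if refused:
            s["refused"] += 1
            s["categories"].add(category)

    flagged = []
    for account_id, s in stats.items():
        if s["total"] < min_requests:
            continue  # not enough traffic to judge
        refusal_rate = s["refused"] / s["total"]
        if (refusal_rate >= refusal_rate_threshold
                and len(s["categories"]) >= distinct_category_threshold):
            flagged.append((account_id, refusal_rate, len(s["categories"])))
    return sorted(flagged, key=lambda x: -x[1])
```

Detection of this kind narrows the probing surface; it does not close it, which is the White House's point.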
Congress will not act before April 14. But the April 7 meeting creates a documented accountability record: if a Mythos-derived capability appears in a confirmed adversarial attack within 18 months of release, the decision trail runs directly to this week. Amodei knows that calculation. Vance made sure of it.