Jamie Dimon, Chief Executive of JPMorgan Chase & Co. (NYSE: JPM), confirmed at the bank’s Q1 2026 earnings call on April 11 that JPMorgan is actively testing Anthropic’s Mythos — the frontier AI model Anthropic withheld from general release after internal safety evaluations identified thousands of high-severity software vulnerabilities. The disclosure places the largest U.S. bank by assets inside Project Glasswing, Anthropic’s gated enterprise deployment program for Mythos, and makes JPMorgan the most prominent institution yet publicly confirmed in the program.
Dimon’s words were precise: “It does create additional vulnerabilities and, maybe down the road, better ways to strengthen itself too. While we’re trying to get the benefits of AI, we also are very cognizant of the risk of cyber. I think the government is aware of it too.” That is not boilerplate CEO hedging. It is a description of a dual-use system — one capable of both discovering vulnerabilities and, in Dimon’s framing, eventually auto-remediating them — running inside an institution that processes $10 trillion in daily transactions.
What Dimon Actually Said at the Q1 2026 Earnings Call
Reports of a JPMorgan–Anthropic Mythos arrangement had circulated at the rumor level for weeks. The April 11 earnings call put it on the record. Dimon confirmed the Mythos testing in the context of cybersecurity risk, which he said AI has made “worse” and “harder to defend against” — even as he maintained JPMorgan remains “well protected.”
The specific phrase “better ways to strengthen itself too” is load-bearing. It is not a generic AI productivity claim. It aligns precisely with Anthropic’s own documentation of Mythos’s code-architecture analysis capabilities — a model that can examine a system’s own security posture and generate remediation proposals. Dimon placed that capability in the future tense deliberately, signaling it is under evaluation, not yet deployed operationally.
The earnings call also included a government awareness signal. “I think the government is aware of it too” is not a throwaway line from a CEO who chooses every public word carefully. JPMorgan is a systemically important financial institution (SIFI) whose AI deployments are subject to oversight from the Financial Stability Oversight Council (FSOC), the Office of the Comptroller of the Currency (OCC), and, in the context of frontier AI, national security review bodies.
The Model That Anthropic Decided Was Too Dangerous to Release Publicly
Mythos’s delayed general release was not a standard staged rollout. Anthropic’s internal safety evaluations found the model autonomously identifying thousands of high-severity software vulnerabilities across test environments — a capability that, without proper sandboxing, could accelerate offensive cyberattacks as readily as it defends against them.
Anthropic has navigated frontier model sensitivity carefully before. The company’s history with unintended exposure of AI systems adds relevant context: each incident sharpened internal protocols around what gets released, when, and to whom. Mythos represents the logical endpoint of that tightening — not a model that leaked, but one Anthropic actively chose to restrict behind institutional guardrails rather than risk broad availability of its most sensitive capability.
The vulnerability-detection function is a core design feature, not a side effect. Mythos was trained extensively on code-security data and benchmarked against red-team adversarial scenarios. Anthropic’s decision to gate it mirrors the logic behind dual-use export controls: the technology is too capable to make broadly available before operational safeguards are proven at scale.
Project Glasswing: How Anthropic’s Gated Frontier Program Works
Project Glasswing provides restricted API access to Mythos under a set of binding operational constraints. Participants operate within sandboxed deployment environments, mandatory output logging, human-in-the-loop requirements for defined output categories, and regular security audits conducted by Anthropic’s trust and safety team.
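The constraint set described above can be sketched as a simple output gate. This is a hypothetical illustration only — Anthropic has not published a Glasswing API, and every name here (`GlasswingGate`, `ModelOutput`, the review categories) is an assumption made for clarity. The sketch shows the two load-bearing properties: every output is logged, and designated output categories are held until a human signs off.

```python
import time
from dataclasses import dataclass, asdict

# Hypothetical Glasswing-style gate: all identifiers are illustrative.
# Outputs in sensitive categories are queued for human review rather
# than released; everything, released or held, lands in the audit log.
HUMAN_REVIEW_CATEGORIES = {"remediation", "exploit_detail"}

@dataclass
class ModelOutput:
    category: str
    content: str
    approved: bool = False

class GlasswingGate:
    def __init__(self):
        self.audit_log = []     # mandatory output logging
        self.review_queue = []  # human-in-the-loop holding area

    def submit(self, output: ModelOutput):
        # Log unconditionally before any release decision.
        self.audit_log.append({"ts": time.time(), **asdict(output)})
        if output.category in HUMAN_REVIEW_CATEGORIES:
            self.review_queue.append(output)  # held until sign-off
            return None
        return output.content  # low-risk categories pass through

    def approve(self, output: ModelOutput):
        output.approved = True
        self.review_queue.remove(output)
        return output.content

gate = GlasswingGate()
released = gate.submit(ModelOutput("summary", "No critical findings."))
held = ModelOutput("remediation", "Patch issue in auth handler.")
blocked = gate.submit(held)   # returns None: pending human review
after_signoff = gate.approve(held)
```

The design choice worth noting is that logging happens before the release decision, so even blocked outputs leave an audit trail — the property a regulator would ask about first.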
Selection criteria for Glasswing favor three institutional characteristics: existing enterprise AI governance frameworks capable of managing frontier model outputs; regulatory compliance infrastructure aligned with applicable model risk management guidance; and strategic value to Anthropic’s stated mission of demonstrating that powerful AI can be deployed responsibly in high-stakes environments.
JPMorgan checks all three. The bank spent approximately $17 billion on technology in fiscal 2025 — a figure that includes a mature AI governance function built partly in response to OCC supervisory guidance on model risk. That pre-existing infrastructure makes JPMorgan a credible Glasswing operating partner, not just a prestigious name on a participant list.
What Testing Mythos Actually Means Inside JPMorgan’s Operations
JPMorgan’s Mythos testing almost certainly falls within cybersecurity and software engineering, not client-facing products. The vulnerability-detection and code-analysis capabilities Dimon described map directly onto internal use cases:
- Automated code auditing across proprietary trading, risk, and settlement systems
- Penetration testing simulation against internal infrastructure
- Threat modeling for AI-augmented adversarial attack vectors
- Regulatory compliance scanning across software deployment pipelines
Each of these operates in a sandboxed context where Mythos outputs are reviewed rather than auto-executed — consistent with Glasswing’s human-in-the-loop constraints. The bank’s Chief Information Officer, Lori Beer, has led JPMorgan’s technology scaling through successive AI adoption cycles: large language models for legal document review, an internal AI assistant deployed to all 300,000+ employees by 2025, and now Mythos evaluation under Glasswing protocols. The operational pattern is consistent — early, structured, internal governance before any client-facing deployment.
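The "reviewed rather than auto-executed" pattern in the use cases above can be made concrete with a minimal sketch. This is an assumption-laden illustration, not a description of JPMorgan's or Anthropic's actual tooling: `mock_model_audit` stands in for a Mythos API call (which is not publicly documented), and the finding schema is invented. The point it demonstrates is that model findings are written to a triage report for human review, never applied to the codebase.

```python
import hashlib
from typing import Callable

SEVERITY_ORDER = {"high": 0, "medium": 1, "low": 2}

def mock_model_audit(source: str) -> list[dict]:
    # Stand-in for a frontier-model code audit; real capability assumed,
    # interface hypothetical.
    findings = []
    if "eval(" in source:
        findings.append({"severity": "high", "issue": "use of eval()"})
    if "password" in source:
        findings.append({"severity": "medium", "issue": "hard-coded credential"})
    return findings

def audit_repo(files: dict[str, str],
               model: Callable[[str], list[dict]]) -> list[dict]:
    report = []
    for path, source in files.items():
        for finding in model(source):
            report.append({
                "path": path,
                # Hash ties the finding to the exact source it was run on.
                "source_sha": hashlib.sha256(source.encode()).hexdigest()[:12],
                **finding,
                "status": "pending_human_review",  # never auto-remediated
            })
    return sorted(report, key=lambda r: SEVERITY_ORDER[r["severity"]])

repo = {
    "settle.py": "result = eval(user_expr)\n",
    "auth.py": "password = 'hunter2'\n",
}
report = audit_repo(repo, mock_model_audit)
```

Hashing the audited source into each finding is the small detail that matters operationally: if the code changes before a human triages the finding, the stale report is detectable.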
The Cybersecurity Paradox at the World’s Largest Bank
JPMorgan spends over $600 million annually on cybersecurity. That figure reflects the scale of its threat surface: the bank operates across 60+ countries, maintains critical payment infrastructure, and has been a documented target of nation-state threat actors since at least the 2014 breach that exposed data on 83 million accounts — the largest financial services hack in U.S. history at the time.
The adversarial AI threat is not hypothetical. The Cybersecurity and Infrastructure Security Agency (CISA) has documented nation-state actors targeting financial infrastructure using AI-augmented vulnerability discovery tools. A bank testing Mythos is running the same class of capability its adversaries may eventually acquire — using it defensively, before it can be used offensively against them. Dimon’s “well protected” claim is the institutional version of that logic.
The broader pattern of AI infrastructure buildout intensifies the stakes. As sovereign AI infrastructure investment accelerates globally, including in regions with direct geopolitical sensitivity, the concentration of frontier AI capabilities inside financial institutions raises systemic questions that no single bank’s security posture can fully answer.
The Fortune 10 Cohort That Is Likely Already Inside Glasswing
JPMorgan is almost certainly not Anthropic’s only Glasswing partner. The program’s selection logic — responsible frontier deployment in high-trust, high-stakes institutional environments — maps cleanly onto a specific class of enterprise customer.
Based on Anthropic’s known commercial relationships and the Glasswing selection criteria, the likely participant cohort spans four sectors:
- Financial infrastructure: institutions operating payment rails, clearing systems, or SIFI-designated banking functions
- Defense and intelligence contractors: organizations with cleared-facility AI deployment experience and existing federal security frameworks
- Cloud hyperscalers: providers with infrastructure to sandbox Mythos at operational scale — Amazon Web Services, which has committed $4 billion to Anthropic, is the obvious candidate
- Critical infrastructure operators: energy grid, telecom, and logistics companies facing AI-augmented threat escalation
The competitive pressure is real. The ongoing consolidation across the AI industry’s major players creates direct incentive for Anthropic to demonstrate institutional Mythos deployments before OpenAI or Google deploy comparable frontier capabilities in enterprise environments. JPMorgan’s public confirmation accelerates that proof-of-concept timeline considerably.
What the Government Awareness Comment Actually Signals
Dimon’s comment about government awareness is the most consequential sentence in his Mythos remarks, and it is worth reading carefully. “I think the government is aware of it too” does not mean regulators have been briefed on a general AI risk concept. It means JPMorgan is almost certainly engaged in active dialogue with federal oversight bodies about a specific frontier AI deployment at a specific SIFI.
The 2023 Executive Order on AI safety established reporting requirements for frontier models with potential national security implications. The successor framework, formalized in early 2026, added financial stability provisions covering AI deployments at systemically important institutions. A SIFI testing a frontier model with autonomous vulnerability-detection capabilities falls within that reporting scope. Dimon saying “the government is aware” at an earnings call — in front of analysts, regulators, and the official transcript — is not a casual observation. It is deliberate public documentation of regulatory coordination.
The public conversation about AI risk has largely focused on labor displacement and content integrity. The broader pushback against unchecked AI deployment has centered on those vectors. JPMorgan-Glasswing shifts the frame toward a more immediate and specific concern: frontier AI capabilities concentrated inside financial infrastructure, with cybersecurity implications that extend to every institution connected to the global payments system.
The Operational Picture
JPMorgan testing Mythos inside Project Glasswing is not a pilot. It is a production-track evaluation by an institution with the governance infrastructure to handle frontier AI outputs and the strategic incentive to get ahead of AI-augmented cybersecurity threats before its adversaries do. Dimon’s framing — acknowledge the risk, assert the protection, reference federal coordination — is the template other Glasswing participants will follow when their own confirmations come.
MegaOne AI tracks 139+ AI tools across 17 categories, including frontier model deployments across enterprise and financial services. Anthropic’s Mythos and Project Glasswing represent the current leading edge of institutional frontier AI adoption. JPMorgan’s public confirmation moves the story from informed speculation to official record. The question now is not whether other Fortune 10 institutions are inside the program — it is which confirms next, and whether Washington decides that confirmation should remain voluntary.