- RSAC 2026 opened with the declaration that “the AI security crisis has arrived,” making AI agent security the central theme of the world’s largest cybersecurity event.
- Prompt injection, agent hijacking, and data exfiltration via Model Context Protocol (MCP) were identified as the three most urgent AI-specific threat vectors.
- A survey presented at the conference found that 78% of enterprises have deployed AI agents in production, but only 12% have implemented dedicated AI security controls.
- Multiple vendors announced AI-specific security products, signaling a new market category estimated at $8.5 billion by 2028.
What Happened
The RSA Conference 2026, held in San Francisco from April 1-4, opened with a keynote declaring that “the AI security crisis has arrived.” RSAC Chair Hugh Thompson framed the conference theme around what he called “the most dangerous gap in enterprise security today” — the chasm between how fast organizations are deploying AI agents and how slowly they are securing them.
The declaration marks a shift from previous years where AI security was a side conversation. In 2026, it is the main stage.
Why It Matters
The timing is not accidental. Over the past 18 months, AI agents have moved from research demos to production infrastructure. Companies are deploying autonomous agents that can browse the web, execute code, manage databases, and interact with third-party APIs — often with broad permissions and minimal oversight. The Gartner AI Agent Adoption Report published in March 2026 estimated that 78% of enterprises now run at least one AI agent in production, up from 22% in early 2025.
What has not kept pace is security. Thompson noted during his keynote that “we are building autonomous systems with the security posture of a 2019 SaaS app,” adding that traditional application security frameworks were not designed for systems that make their own decisions.
Technical Details
Three threat vectors dominated the RSAC 2026 agenda. The first is prompt injection, where attackers embed malicious instructions in data that AI agents process — emails, documents, web pages — to hijack agent behavior. Research presented by Johann Rehberger of Embrace The Red demonstrated a prompt injection chain that exfiltrated customer data from a production CRM agent in under 90 seconds across four major agent frameworks.
The second vector is agent hijacking, where attackers exploit the tool-calling capabilities of AI agents to perform unauthorized actions. A live demonstration showed an agent with calendar access being manipulated into sending meeting invitations containing credential-harvesting links, all triggered by a single poisoned email in the agent’s context window.
The third and most debated threat is data exfiltration via Model Context Protocol (MCP). MCP, the open standard developed by Anthropic for connecting AI models to external tools, has seen rapid adoption. But security researchers at RSAC presented findings showing that MCP servers can be configured — or compromised — to silently relay sensitive data. Trail of Bits researcher Dan Guido presented an analysis of 147 public MCP server implementations and found that 63% lacked authentication, 41% had no input validation, and 89% logged no audit trail of tool invocations.
“MCP is TCP/IP for AI agents,” Guido said during his session. “And right now, we are in the equivalent of 1995 — everything is running without firewalls.”
Who’s Affected
Enterprise security teams face the most immediate pressure. Organizations that have deployed AI agents in customer service, IT operations, software development, and financial analysis now need to audit permissions, implement monitoring, and establish incident response procedures for a class of software that behaves fundamentally differently from traditional applications.
The vendor ecosystem is responding. At RSAC 2026, at least 14 companies announced AI-specific security products. Protect AI, which raised $60 million in late 2025, launched an agent firewall that intercepts and inspects tool calls in real time. Lasso Security demonstrated a prompt injection detection system with a claimed 94.7% detection rate. HiddenLayer announced runtime monitoring for MCP connections.
Developers building on agent frameworks — LangChain, CrewAI, AutoGen, and others — are also directly affected. The lack of built-in security primitives in these frameworks means that security is currently the responsibility of each individual developer, a model that has historically produced poor outcomes.
What’s Next
RSAC 2026 is expected to catalyze concrete action. The Cloud Security Alliance announced a new AI Agent Security Working Group that will publish its first framework by Q3 2026. NIST is reportedly developing an AI agent security supplement to its AI Risk Management Framework, with a draft expected in late 2026.
The gap, however, remains wide. As Rehberger noted in his closing remarks: “The average enterprise deployed its first AI agent 11 months ago. The average enterprise started thinking about AI agent security 3 months ago. That 8-month gap is where the breaches will come from.” Whether organizations close that gap before attackers exploit it will define the next phase of enterprise AI adoption.
