Anthropic’s Model Context Protocol (MCP) crossed 97 million installs in March 2026 — a number that would have been laughable as a 12-month projection when the protocol quietly launched in October 2024. Today, MCP is the default mechanism by which AI agents connect to external tools, APIs, and data sources. The protocol nobody heard of 18 months ago is now as load-bearing as any piece of infrastructure in modern software.
The Numbers Don’t Lie
97 million installs in roughly 18 months places MCP among the fastest infrastructure adoptions in recent software history. For context, npm took years to reach comparable developer penetration; Docker, widely considered a rapid-adoption success story, needed about three years to cross 100 million container pulls. MCP hit this territory in a fraction of the time, driven by a market starved for a standard way to connect AI models to the real world.
The growth curve was not linear. According to Anthropic’s tracking data, the first 10 million installs took until early 2025. The jump from 10 million to 97 million happened in roughly 12 months, nearly a tenfold increase in cumulative installs, driven by what engineers call a “compatibility cascade”: once a protocol becomes the easiest way to build something, every adjacent tool starts supporting it rather than explaining why it doesn’t.
What MCP Actually Does — And Why It Took This Long to Exist
Before MCP, connecting an AI agent to an external tool required custom API wrappers, bespoke context-injection logic, and significant engineering overhead per integration. Every team building agents solved the same connection problem independently. MCP standardized the handshake: a server exposes resources and tools in a defined schema, a client (the AI host) reads that schema and makes structured calls. The model knows what tools are available and how to invoke them without hardcoded instructions.
This matters because the core limitations of language models — inability to take real-world actions, access live data, or maintain state across sessions — are all addressed at the tool-connection layer. MCP doesn’t make models smarter. It makes them operational.
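The handshake described above can be sketched concretely. MCP messages are JSON-RPC 2.0, and `tools/list` is one of the protocol's methods; the toy server below is a hand-rolled illustration of that exchange, not the official SDK, and the `query_database` tool is a made-up example.

```python
import json

# Toy MCP-style server: answers a JSON-RPC "tools/list" request with a
# schema describing the tools it exposes. Illustrative only — real servers
# use the official MCP SDKs and a full transport (stdio or streamable HTTP).
TOOLS = [
    {
        "name": "query_database",  # hypothetical tool for this sketch
        "description": "Run a read-only SQL query",
        "inputSchema": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
            "required": ["sql"],
        },
    }
]

def handle(request_json: str) -> str:
    """Dispatch a single JSON-RPC request and return the response."""
    req = json.loads(request_json)
    if req.get("method") == "tools/list":
        result = {"tools": TOOLS}
    else:
        result = {"error": "unsupported in this sketch"}
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})

# The client (the AI host) reads the returned schema and now knows, without
# any hardcoded instructions, what it can call and what arguments are required.
request = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
response = json.loads(handle(request))
print(response["result"]["tools"][0]["name"])  # query_database
```

The point of the exchange is that the schema, not the prompt, tells the model what exists: the client discovers `query_database` and its required `sql` argument purely by asking.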
The protocol defines three primitives:
- Resources — data the model can read (files, database records, API responses)
- Tools — actions the model can invoke (write a file, call an endpoint, query a database)
- Prompts — reusable prompt templates the model can reference and execute
Deliberately minimal. Broadly implementable. Not constraining what any individual server exposes.
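Conceptually, a server is just a bundle of those three primitives. The sketch below uses simplified, hypothetical Python shapes whose fields mirror the spirit of the spec rather than its exact wire format; the log file, `write_file` tool, and `summarize` prompt are invented examples.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:          # data the model can read
    uri: str
    description: str

@dataclass
class Tool:              # actions the model can invoke
    name: str
    description: str
    input_schema: dict = field(default_factory=dict)

@dataclass
class Prompt:            # reusable prompt templates
    name: str
    template: str

# A server, conceptually: three lists, nothing more.
server_manifest = {
    "resources": [Resource("file:///logs/app.log", "Application log file")],
    "tools": [Tool(
        "write_file",
        "Write content to a path",
        {"type": "object",
         "properties": {"path": {"type": "string"},
                        "content": {"type": "string"}}},
    )],
    "prompts": [Prompt("summarize", "Summarize the following: {text}")],
}
print(sorted(server_manifest))  # ['prompts', 'resources', 'tools']
```

Because the surface area is this small, implementing a server for an existing API is mostly a matter of describing what it already does, which is a large part of why adoption spread so quickly.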
MegaOne AI tracks 139+ AI tools across 17 categories, and the shift in how those tools expose their capabilities has been measurable. In late 2024, fewer than 8% of tracked developer tools offered MCP-native integrations. By Q1 2026, that number exceeds 61%.
Every Major AI Company Has Signed On — Including the Competitors
The protocol’s legitimacy arrived not from Anthropic’s authority but from competitive adoption. OpenAI — whose relationship with Anthropic ranges from frosty to legally adversarial — shipped native MCP support in early 2025. Google integrated MCP across the Gemini API and Vertex AI platform. Microsoft embedded it into GitHub Copilot’s extension framework. When three companies competing for the same enterprise contracts all support the same protocol without a standards body forcing them to, that protocol has won.
OpenAI has been aggressive about capturing enterprise AI infrastructure at scale, a pattern visible across its recent strategic moves including its reported $1 billion Disney content agreement. MCP adoption fits the same infrastructure land-grab: not just a developer convenience but a positioning move to ensure its models remain the preferred runtime when agents start doing real work.
The tooling ecosystem followed the model providers. Slack, Notion, Linear, GitHub, Cloudflare, Stripe, and dozens of database providers now publish official MCP servers. The GitHub MCP server alone logged over 4 million pulls in February 2026. When enterprise infrastructure at this scale ships MCP support, it stops being a protocol choice and becomes a dependency.
The HTTP Parallel Is Structurally Accurate, Not Just Promotional
The comparison circulating across engineering blogs — MCP as the HTTP of AI agents — holds up under scrutiny. HTTP didn’t make individual websites better; it made websites possible at scale by standardizing how clients and servers communicate. Before HTTP, every networked application spoke its own dialect. After HTTP, the web compounded: a page built in 1995 can receive traffic from a browser built in 2026 because both speak the same protocol.
MCP does the same for agent-to-tool communication. Individual agent implementations existed before MCP — often better-tuned for specific use cases. But they didn’t compound. Every custom integration was an engineering dead end, unusable by any model or framework other than the one it was built for. MCP integrations are additive: a tool built to MCP spec works with Claude, GPT-4o, Gemini, Llama, and whatever model ships next quarter, without modification.
The counterargument — that MCP creates a single point of architectural failure, or that a competing protocol could emerge and displace it — deserves honest treatment. Protocols do get replaced. SOAP didn’t survive REST. But SOAP was verbose, proprietary in spirit, and actively fought by the developer community. MCP is lean, open-source by design, and embraced by the exact engineers who would build its replacement if they wanted one.
What 97 Million Installs Actually Represents
Install counts are an imprecise metric. 97 million installs doesn’t mean 97 million active agents making tool calls right now. It means 97 million development environments, CI/CD pipelines, and production deployments have MCP capability configured. Infrastructure providers estimate actual utilization — agents making live tool calls against MCP servers in production — at 15 to 20 million daily active connections as of March 2026.
That ratio of installs to active use is entirely typical for infrastructure packages. Most environments configured for Docker or Kubernetes aren’t running containers at every moment. The 15-20 million daily connection figure is the more honest signal, and it implies genuine usage at a scale that justifies the protocol’s position as foundational infrastructure rather than a developer experiment.
Anthropic’s decision to open-source MCP from launch day removed the primary obstacle to enterprise adoption. Procurement teams don’t fight open protocols the way they fight proprietary APIs. That decision — reportedly debated internally at Anthropic, according to sources familiar with the company’s 2024 engineering roadmap — may be the most strategically significant choice the company made that year, a period marked by significant internal scrutiny as the accidental public release of Claude agent source code made clear.
The Agent Ecosystem Now Depends on MCP
The deeper implication of MCP’s adoption is what it reveals about the direction of AI development. Autonomous agents — systems that plan, decide, and execute multi-step tasks with minimal human intervention — require reliable tool access to be useful beyond conversation. The agent frameworks with the most enterprise traction in 2026 (LangGraph, CrewAI, AutoGen, and Anthropic’s Claude agent runtime) all use MCP as their primary tool-connection layer, not as an option but as default infrastructure.
Nomad-style autonomous discovery systems, like those analyzed in MegaOne AI’s coverage of Nomad’s autonomous exploration capabilities, represent the forward edge of where MCP-enabled agents are heading: systems that don’t just call predefined tools but dynamically discover and invoke available resources at runtime. MCP’s schema-first design makes this tractable in a way bespoke integrations never could — because an agent can read a server’s capabilities manifest and reason about what to call, rather than relying on hardcoded instructions.
The broader concern — that increasingly capable autonomous systems raise legitimate questions about human oversight when agents can connect to anything — is real and increasingly organized. The Humans First movement is partly a response to exactly this kind of infrastructure becoming invisible: when MCP makes tool access frictionless, the question of what agents should connect to becomes as important as the question of what they can connect to.
The Infrastructure Has Caught Up to the Ambition
97 million installs is not the story. The story is that the AI agent ecosystem now has a shared language for tool communication, and it arrived faster than any previous infrastructure standard at comparable scale. Developers building agents in 2026 don’t treat tool connections as a bespoke engineering problem to solve fresh for each project. They treat it the way web developers treat fetch(): solved, standardized, available.
When a protocol disappears into the background and stops requiring justification in architecture reviews, it has become infrastructure. MCP is there. The question for every AI team now is not whether to build on MCP, but how far to extend what it makes possible.