- Anthropic’s Claude models became enabled by default in Microsoft 365 Copilot for most commercial tenants worldwide as of January 7, 2026.
- Anthropic now operates as a Microsoft subprocessor under Microsoft’s Product Terms and Data Protection Addendum, eliminating the need for separate vendor agreements.
- EU, EFTA, and UK tenants have Anthropic set to off by default due to data residency constraints — Claude is excluded from Microsoft’s EU Data Boundary commitments.
- Government clouds (GCC, GCC High, DoD) have no access to Anthropic models and no admin toggle is available.
What Happened
Microsoft enabled Anthropic’s Claude models by default across most commercial Microsoft 365 Copilot tenants starting January 7, 2026. The change shifted Anthropic from an opt-in third-party provider to a default subprocessor operating under Microsoft’s existing Product Terms and Data Protection Addendum. Organizations that previously had to accept Anthropic’s separate commercial terms now have Claude active in their Copilot environments automatically; administrators did not need to take any action to enable the integration.
Charles Lamanna, President of Business and Industry Copilot at Microsoft, stated: “Copilot will continue to be powered by OpenAI’s latest models, and now our customers will have the flexibility to use Anthropic models too.”
Why It Matters
The subprocessor designation fundamentally changes the contractual relationship between enterprise customers and Anthropic. Rather than operating as an independent vendor requiring separate legal agreements, Anthropic now functions under Microsoft’s direction and data protection framework. For enterprise procurement teams, this eliminates a layer of vendor assessment — Claude’s data processing falls under the same DPA that governs the rest of Microsoft 365, streamlining compliance review processes.
The default-on configuration means Claude is actively processing organizational data through Copilot features like Agent Mode in Office apps unless an administrator explicitly disables it. Organizations that did not take action before January 7 may already have Claude handling internal data through Copilot workflows without explicit authorization from their security or compliance teams. This is a significant shift from the September 2025 arrangement, which required explicit opt-in and acceptance of Anthropic’s separate commercial terms before any data processing occurred.
Technical Details
Data processed through Claude in Copilot is transferred from Azure to Anthropic’s servers, which are hosted in AWS and GCP data centers located primarily in the United States. This differs from OpenAI’s models, which run within Azure infrastructure. Under the subprocessor arrangement, Anthropic does not use customer data to train its models.
A critical limitation: Anthropic models are excluded from Microsoft’s EU Data Boundary commitments and, where applicable, in-country processing guarantees. Organizations subject to GDPR or equivalent regional data residency requirements must evaluate whether enabling Claude introduces compliance risks for workloads involving personal data.
The administrator toggle for Anthropic models appeared in the Microsoft 365 admin center on December 8, 2025, giving IT teams roughly 30 days to evaluate and adjust settings before the January 7 default activation. The previous opt-in toggle requiring acceptance of Anthropic’s commercial terms was completely removed as part of this transition.
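The "roughly 30 days" figure follows directly from the two dates in the rollout timeline; a minimal Python check, using only the dates stated above:

```python
from datetime import date

# Toggle appeared in the Microsoft 365 admin center on December 8, 2025;
# default activation followed on January 7, 2026.
toggle_available = date(2025, 12, 8)
default_on = date(2026, 1, 7)

window_days = (default_on - toggle_available).days
print(window_days)  # → 30
```

So the evaluation window was exactly 30 days, all of it spanning the end-of-year holiday period, which is worth noting when judging how much review time IT teams actually had.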
Who’s Affected
Most commercial tenants worldwide have Anthropic enabled by default. EU, EFTA, and UK organizations have the toggle set to off by default and must opt in to use Claude within Copilot. Government and sovereign cloud environments (GCC, GCC High, and DoD) have no access to Anthropic models at all: Anthropic lacks FedRAMP certification for these environments, although it holds FedRAMP High authorization for its separate government product.
Anthropic’s compliance posture has expanded to support enterprise adoption: the company now holds SOC 2 Type II, ISO 27001, ISO 42001, CSA Star, HIPAA, and NIST 800-171 certifications for its API and Enterprise offerings. The company’s Trust Center has published updated compliance documentation reflecting this broadening enterprise footprint and the additional obligations that come with subprocessor status under Microsoft’s agreements.
What’s Next
The integration represents Microsoft’s shift toward a multi-model AI strategy, with Claude being tested across Excel, PowerPoint, and GitHub Copilot alongside OpenAI’s models. For Anthropic, the Microsoft subprocessor role follows similar availability through Amazon Bedrock and Google Cloud’s Vertex AI, positioning Claude as available across all three major cloud platforms. The unresolved question for administrators is whether organizations in regulated industries have adequately assessed the data transfer implications of a default-on third-party AI subprocessor processing their Copilot interactions outside of Azure infrastructure.