- A misconfigured .map file in the @anthropic-ai/claude-code npm package, version 2.1.88, exposed ~512,000 lines of TypeScript source across ~1,900 files on March 31, 2026.
- Developers discovered 187 hardcoded spinner verbs, a sentiment system that flags prompts containing profanity as negative, an undercover mode, anti-distillation decoy tools, and dozens of unreleased features behind flags.
- Anthropic confirmed the incident was “a release packaging issue caused by human error, not a security breach” — but then accidentally took down 8,100+ unrelated GitHub repos with overly broad DMCA notices.
- The mirrored repository surpassed 84,000 stars and 82,000 forks before GitHub began enforcing targeted takedowns, making the code effectively permanent on the public internet.
What Happened
At approximately 4:23 am ET on March 31, 2026, security researcher Chaofan Shou posted on X: “Claude code source code has been leaked via a map file in their npm registry!” Within minutes, the developer community was pulling apart a 59.8 MB JavaScript source map bundled into version 2.1.88 of the @anthropic-ai/claude-code package — a file intended solely for internal debugging that had no business being in a public release.
The .map file did not contain the source directly. It pointed to a zip archive hosted on Anthropic’s own Cloudflare R2 storage bucket, which in turn contained the complete TypeScript codebase: 512,000 lines across 1,906 files. The mistake was a textbook packaging error — someone forgot to add *.map to the .npmignore, or the bundler was not configured to suppress source map generation for production builds.
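This class of mistake can be caught mechanically before publish. A minimal sketch of such a guard — the function name and file list are hypothetical, and a real pipeline would read the packed file list from npm's dry-run output rather than hardcode it:

```typescript
// Reject a publish if the packed file list contains source maps.
// In practice the list would come from `npm pack --dry-run --json`.
function findLeakedSourceMaps(packedFiles: string[]): string[] {
  return packedFiles.filter((f) => f.endsWith(".map"));
}

const files = ["dist/cli.js", "dist/cli.js.map", "README.md"];
const leaked = findLeakedSourceMaps(files);
if (leaked.length > 0) {
  // A prepublish hook would exit non-zero here, blocking the release.
  console.error(`refusing to publish: source maps in tarball: ${leaked.join(", ")}`);
}
```

Wired into a prepublish script, this turns a silent packaging slip into a failed release.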
Anthropic confirmed the incident in a statement carried by multiple outlets: “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.” A senior executive separately told Bloomberg the exposure resulted from “process errors” tied to the company’s fast release cycle. source
What Was Exposed
The archive contained the full engine powering Claude Code: LLM API call handling, streaming responses, tool-call loops, thinking mode, retry logic, token counting, permission models, and the complete tool suite. It also exposed dozens of feature flags for capabilities that had never been publicly announced.
187 spinner verbs. Among the first things developers catalogued was a hardcoded list of 187 present-participle verbs used to animate Claude Code’s loading spinner. The list includes mundane entries alongside words like “hullaballooing,” “razzmatazzing,” “recombobulating,” and “topsy-turvying.” A GitHub repo reproducing the full spinner simulator accumulated stars within hours of going live. source
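The spinner mechanic itself takes only a few lines to reproduce. A sketch using the four verbs quoted above — the full 187-entry list is not reproduced here, and the frame format is an assumption, not the leaked implementation:

```typescript
// A tiny spinner reproduction: pick a verb by tick, animate an ellipsis.
// Only four of the 187 leaked verbs are included as a sample.
const SPINNER_VERBS = [
  "Hullaballooing",
  "Razzmatazzing",
  "Recombobulating",
  "Topsy-turvying",
];

function spinnerFrame(tick: number): string {
  const verb = SPINNER_VERBS[Math.floor(tick / 4) % SPINNER_VERBS.length];
  const dots = ".".repeat(tick % 4); // 0..3 trailing dots per verb cycle
  return `${verb}${dots}`;
}
```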
A swear word detector. The analytics system flags a user prompt with negative sentiment whenever it detects profanity. Claude Code creator Boris Cherny had previously referred to the internal telemetry visualization for this signal as the “fucks” chart. The mechanism appears designed to help Anthropic determine when users are having a poor experience — a reasonable product signal, though its existence was not publicly disclosed. source
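Mechanically, such a signal needs little more than a denylist scan over the prompt. A hedged sketch — the actual word list, flag names, and telemetry schema were not part of the leak coverage, so the entries below are placeholders:

```typescript
// Hypothetical reconstruction of a profanity-based sentiment flag.
// The real denylist is unknown; these entries are placeholders.
const PROFANITY = new Set(["damn", "hell"]);

function sentimentFlag(prompt: string): "negative" | "neutral" {
  const words = prompt.toLowerCase().split(/\W+/);
  return words.some((w) => PROFANITY.has(w)) ? "negative" : "neutral";
}
```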
Undercover mode. A file called undercover.ts implements a mode that strips all traces of Anthropic internals when Claude Code operates in non-internal repositories. Under this mode, the model is instructed to never mention internal codenames such as “Capybara” or “Tengu,” internal Slack channels, internal repo names, or the phrase “Claude Code” itself. Practically, this means AI-authored commits or pull requests from Anthropic employees in open-source projects would carry no indication that an AI wrote them. source
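Per the article, the leaked file works by instructing the model never to mention these terms; a post-hoc text filter is the simpler cousin of that idea. An illustrative sketch of the filtering variant — the function name is invented, and only the codenames quoted above are included:

```typescript
// Illustrative redaction pass in the spirit of undercover.ts: scrub
// internal codenames from text before it leaves an internal context.
// (The leaked code instructs the model instead of post-filtering.)
const INTERNAL_TERMS = ["Capybara", "Tengu", "Claude Code"];

function redactInternalRefs(text: string): string {
  return INTERNAL_TERMS.reduce(
    (out, term) => out.split(term).join("[redacted]"),
    text,
  );
}
```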
Anti-distillation decoy tools. When the ANTI_DISTILLATION_CC flag is active, Claude Code injects fake tool definitions into API requests. The mechanism is designed to pollute training data for anyone intercepting Claude Code’s API traffic to train a competing model. A second anti-distillation layer uses server-side text summarization with cryptographic signatures. source
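Conceptually, decoy injection just mixes fabricated tool schemas into the request's tool list when the flag is on. A hypothetical sketch — the decoy names, shapes, and flag wiring here are invented for illustration:

```typescript
interface ToolDef {
  name: string;
  description: string;
}

// Hypothetical decoy; the real definitions were not published.
const DECOY_TOOLS: ToolDef[] = [
  { name: "cache_prewarm", description: "No-op; pollutes intercepted traffic." },
];

function withDecoys(realTools: ToolDef[], antiDistillation: boolean): ToolDef[] {
  // With the flag active, intercepted API traffic no longer reflects
  // the genuine tool surface, degrading its value as training data.
  return antiDistillation ? [...realTools, ...DECOY_TOOLS] : realTools;
}
```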
KAIROS daemon. A feature flag called kairosActive activates a background agent mode that operates without the terminal UI, suppresses the status bar, disables planning mode, and silences the AskUserQuestion tool. The code suggests it is designed as a persistent headless assistant capable of running tasks without any user present. source
Memory system. The leaked code confirmed that Claude Code’s memory relies on a lightweight MEMORY.md index containing ~150-character pointer entries that are always loaded into context. The index stores locations, not data. Actual project knowledge lives in topic files fetched on demand. Raw session transcripts are never fully re-read but are instead searched with grep for specific identifiers. The agent is also instructed to treat its own memory as a hint and verify facts against the live codebase before acting. source
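The described design can be sketched as an always-loaded index of short pointers plus on-demand topic lookup. Entry formatting, file contents, and helper names below are assumptions beyond what the article states:

```typescript
// Sketch of the described layout: MEMORY.md holds short pointer entries
// that are always in context; topic files hold the actual notes and are
// fetched only when a pointer is followed.
const MAX_POINTER_CHARS = 150;

function toPointerEntry(topic: string, summary: string): string {
  // Truncate so each index entry stays a cheap, always-loaded hint.
  return `${topic}: ${summary}`.slice(0, MAX_POINTER_CHARS);
}

// Stand-in for on-disk topic files.
const topicFiles = new Map<string, string>([
  ["build", "Run scripts/build.sh; artifacts land in dist/."],
]);

function followPointer(topic: string): string | undefined {
  // Fetched on demand, not preloaded -- and per the leaked instructions,
  // treated as a hint to verify against the live codebase.
  return topicFiles.get(topic);
}
```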
System prompts client-side. Most of Claude Code’s system prompts are assembled and injected client-side before the API call is made, rather than being applied server-side. This means the full instruction set governing Claude Code’s behavior was visible in the leak. source
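Client-side assembly means the prompt segments are concatenated in the CLI and placed in the request body before anything reaches the API — which is why they shipped with the package. A schematic sketch; the segment names and contents are invented:

```typescript
// Schematic of client-side system prompt assembly: segments are joined
// locally and sent in the request, not applied by the server.
const SEGMENTS = {
  identity: "You are a coding agent.", // hypothetical content
  toolRules: "Prefer tools over guessing.", // hypothetical content
};

function buildRequest(userPrompt: string) {
  return {
    system: Object.values(SEGMENTS).join("\n\n"),
    messages: [{ role: "user", content: userPrompt }],
  };
}
```

Anything assembled this way is, by construction, visible to whoever holds the client binary.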
The Architecture at a Glance
The codebase follows a modular plugin architecture. Each capability — file reading, bash execution, web fetching, LSP integration — is a discrete, permission-gated tool module. Analysts counted approximately 40 tool modules across the system, with bash security validation alone running 23 sequential checks on every shell command, covering scenarios including Unicode zero-width space injection, IFS null-byte injection, and a malformed token bypass documented in a prior HackerOne report.
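Two of the named scenarios are easy to illustrate: rejecting zero-width characters that can smuggle tokens past pattern matching, and rejecting embedded null bytes. A sketch of that style of sequential validation — the check names, ordering, and return convention are assumptions, not the leaked 23-check pipeline:

```typescript
// Illustrative sequential validation: each check returns a rejection
// reason or null, and a command must pass every check in order.
type Check = (cmd: string) => string | null;

const checks: Check[] = [
  (cmd) => (/[\u200B\u200C\u200D\uFEFF]/.test(cmd) ? "zero-width character" : null),
  (cmd) => (cmd.includes("\u0000") ? "embedded null byte" : null),
];

function validateBashCommand(cmd: string): string | null {
  for (const check of checks) {
    const reason = check(cmd);
    if (reason) return reason; // rejected
  }
  return null; // passed all checks
}
```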
The code also contained animal codenames — Tengu, Fennec, Capybara — and references to feature names including Penguin Mode, Dream System, and a Tamagotchi pet system with gacha mechanics. A single function inside the source ran to 3,167 lines. source
Security Implications
Anthropic maintained that no customer data or credentials were exposed and that the incident did not constitute a security breach. Security researchers largely agreed with that framing but flagged second-order risks.
The exposed source provides a detailed map of Claude Code’s four-stage context management pipeline, giving attackers a blueprint for crafting prompt injection payloads designed to survive context compaction and persist across long sessions. It also enables malicious forks that repackage Claude Code with inserted backdoors — versions that would be difficult to distinguish from the legitimate package without hash verification.
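The defense against repackaged forks is ordinary integrity checking: hash the downloaded tarball and compare against the digest published by the registry before installing. A minimal sketch using Node's crypto module — the demonstration digest below is simply sha256 of the string "hello", not a real package hash:

```typescript
import { createHash } from "node:crypto";

// Verify downloaded package bytes against a known digest, e.g. the
// integrity value the npm registry publishes for a release.
function verifyDigest(data: string, expectedHex: string): boolean {
  const actual = createHash("sha256").update(data).digest("hex");
  return actual === expectedHex;
}

// Demonstration only: sha256("hello").
const ok = verifyDigest(
  "hello",
  "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
);
```

npm's own lockfiles record sha512 integrity values for the same purpose; the point is that a tampered fork fails the comparison.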
The timing added further concern: in the hours around the source map publication, malicious versions of the axios npm package containing a remote access trojan were separately live on the same registry. There is no confirmed connection between the two incidents, but enterprise security teams flagged the coincidence. source
Anthropic’s DMCA Response and Its Fallout
Anthropic moved quickly to suppress mirrors, filing DMCA takedown notices with GitHub. The company ultimately targeted over 8,100 repositories. The effort immediately ran into trouble: notices were sent not only to mirrors of the leaked code but also to legitimate forks of Anthropic’s own public Claude Code repository — forks that had nothing to do with the leak.
Boris Cherny, Anthropic’s head of Claude Code, publicly acknowledged the error: “This was not intentional, we’ve been working with GitHub to fix it.” Anthropic retracted notices for all but one repository and 96 forks confirmed to contain the actual leaked source. source
The DMCA campaign had limited practical effect. Mirrored versions spread to decentralized platforms and independent clean-room rewrites that do not host the leaked source directly. By the time takedowns began, the primary mirrored repository had surpassed 84,000 stars and 82,000 forks. The code is, for practical purposes, permanent. source
What Comes Next
For developers, the leak has shifted from a curiosity to a reference document. The modular architecture, memory design, and tool permission patterns are already being discussed as blueprints for building agentic systems. Independent rewrites in languages including Rust are using the architectural insights — without hosting the leaked code — to build compatible tooling.
For Anthropic, the more immediate question is process. The company has said it is implementing new packaging controls. Whether those controls extend to the broader release pipeline — and whether any of the exposed unreleased features ship in a changed form — remains to be seen.
The incident also settled a long-running debate about what Claude Code actually does with user data at the edges. Some answers, like the profanity sentiment flag, were unexpected. Others, like the client-side system prompt assembly, will inform how security teams evaluate the tool going forward. source
