Anthropic Leaks Claude Code — What’s Inside

Abu bakkar · 2 weeks ago · 5 min read

On March 31, 2026, Anthropic shipped the entire source code of Claude Code to the public npm registry — not through a breach, not through a hack, but through a routine package publish that included a file that should never have been there.

It's one of the most consequential accidental disclosures in recent AI history, and the timing couldn't have been worse.

How it happened

When you build a JavaScript/TypeScript package, your toolchain often generates .map files — source maps that link minified production code back to the original source. They exist for debugging. They're not supposed to ship in public packages.
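Concretely, a source map is just a JSON document, and the v3 format's sourcesContent field can carry every original file verbatim, which is why a stray .map in a published package amounts to shipping the source itself. A minimal sketch of that structure (the filenames and file contents here are hypothetical):

```python
import json

# A tiny source-map-v3 document of the kind a bundler emits alongside
# minified output. Filenames and contents are illustrative only.
source_map = json.dumps({
    "version": 3,
    "file": "cli.min.js",
    "sources": ["src/cli.ts"],
    "sourcesContent": ["export const main = () => console.log('hello');\n"],
    "mappings": "AAAA",
})

parsed = json.loads(source_map)
# When sourcesContent is populated, the original files travel inside the map:
for path, content in zip(parsed["sources"], parsed["sourcesContent"]):
    print(path, "->", len(content), "bytes of original source")
```

The mappings field only relates positions; it is sourcesContent that makes a leaked map equivalent to a source leak.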

Claude Code is built on Bun, a JavaScript runtime that Anthropic acquired in late 2025. A known bug in Bun (issue #28001, filed March 11, 2026) caused source maps to be included in production builds even when they shouldn't have been. The bug was open for 20 days. Nobody caught it.

When @anthropic-ai/claude-code version 2.1.88 was pushed to npm, it came bundled with a 59.8 MB .map file containing approximately 513,000 lines of unobfuscated TypeScript across 1,906 files — essentially the entire client-side source of one of Anthropic's most commercially significant products.
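An npm package is just a gzipped tarball, so this class of mistake is cheap to catch before publishing: scan the packed artifact for .map entries. A sketch of that check (the tarball built here is a synthetic stand-in, not the real package):

```python
import io
import tarfile

def find_source_maps(tarball_bytes: bytes) -> list[str]:
    """Return the names of any .map files inside an npm tarball."""
    with tarfile.open(fileobj=io.BytesIO(tarball_bytes), mode="r:gz") as tar:
        return [m.name for m in tar.getmembers() if m.name.endswith(".map")]

# Build a tiny stand-in tarball to demonstrate the check.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, data in [("package/cli.js", b"console.log(1);"),
                       ("package/cli.js.map", b"{}")]:
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

leaked = find_source_maps(buf.getvalue())
print(leaked)  # a non-empty list means the publish would ship source maps
```

Wiring a check like this into CI as a prepublish gate is the kind of guardrail that would have stopped version 2.1.88 at the door.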

By 4:23 AM ET, a developer named Chaofan Shou posted the discovery on X with a direct download link. Within hours, the codebase was mirrored across dozens of GitHub repositories and was being actively analyzed, forked, and ported to Rust and Python. One fork — called claw-code — reached 100,000 GitHub stars in a single day, making it the fastest-growing repository in GitHub's history.

Anthropic issued DMCA takedowns targeting mirrors, but in the process accidentally flagged thousands of unrelated repositories. The company later retracted the bulk of those notices. The code, meanwhile, is now permanently distributed across hundreds of public and decentralized repositories.

What the leak revealed

The disclosed codebase exposed the full orchestration logic of Claude Code: how it handles tool use, MCP server integration, hook execution, and permission flows. Some specific findings that drew significant attention:

Undercover Mode

The source contained a system prompt explicitly instructing Claude Code to make "stealth" contributions to public open-source repositories without disclosing that the commits were AI-generated. The prompt reads, in part: "You are operating UNDERCOVER... Your commit messages MUST NOT contain ANY Anthropic-internal information." Anthropic likely uses this for internal dogfooding, but the existence of the feature — and its now-public implementation — has broader implications for how organizations think about AI attribution in open-source work.

Next model codenames

Internal beta flags in the source reference API version strings for a model family codenamed Capybara, suggesting a major release is well into development.

Known bug metrics

Internal logs revealed that a compaction bug was causing 1,279 sessions per day to hit 50+ consecutive failures, wasting an estimated 250,000 API calls daily. The fix was three lines of code.
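Anthropic hasn't published the actual three-line fix, but the shape of the bug, an unbounded retry loop, and its remedy can be sketched as a consecutive-failure cap. All names and the cap value below are hypothetical:

```python
def compact_with_retry_cap(attempt_compaction, max_consecutive_failures=3):
    """Retry a failing operation, but bail out after a bounded number of
    consecutive failures instead of looping forever (hypothetical fix shape)."""
    failures = 0
    while True:
        if attempt_compaction():
            return True          # compaction succeeded
        failures += 1
        if failures >= max_consecutive_failures:
            return False         # give up instead of burning more API calls

# Simulate a compaction step that never succeeds:
calls = {"n": 0}
def always_fails():
    calls["n"] += 1
    return False

assert compact_with_retry_cap(always_fails) is False
print(calls["n"])  # → 3: the cap bounds the wasted calls
```

Without the cap, each stuck session burns API calls indefinitely, which is how 1,279 sessions a day can add up to an estimated 250,000 wasted calls.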

The security risk

The code leak isn't just an IP problem — it's an attack surface problem. Security researchers mapped three exploit chains from the now-readable source, specifically around hook execution and MCP server integration. With full visibility into the permission and trust logic, attackers can now craft malicious repositories designed to trigger arbitrary shell commands or exfiltrate credentials when opened in Claude Code.

This happened to coincide with the Axios npm supply chain attack (see our separate article), which was live during the same window. Anyone who updated Claude Code via npm between 00:21 and 03:29 UTC on March 31 may have pulled in the compromised Axios versions alongside the leaked source package.

What to do if you use Claude Code

Immediately:

  • Update past version 2.1.88 using the native installer: curl -fsSL https://claude.ai/install.sh | bash
  • If you updated via npm during the 00:21–03:29 UTC window on March 31, check your lockfiles for axios@1.14.1, axios@0.30.4, or plain-crypto-js — rotate all secrets if found
  • Rotate your Anthropic API key via the developer console and review usage for anomalies
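The lockfile check in the second step can be scripted. A minimal sketch, assuming a standard package-lock.json v2/v3 layout (the compromised version strings are the ones named above; the lockfile fragment is synthetic):

```python
import json

COMPROMISED = {("axios", "1.14.1"), ("axios", "0.30.4")}
SUSPECT_NAMES = {"plain-crypto-js"}

def scan_lockfile(lock: dict) -> list[str]:
    """Flag compromised pins in an npm package-lock.json 'packages' map."""
    hits = []
    for path, meta in lock.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1]  # strip the install path
        version = meta.get("version", "")
        if (name, version) in COMPROMISED or name in SUSPECT_NAMES:
            hits.append(f"{name}@{version}")
    return hits

# Synthetic lockfile fragment for demonstration:
lock = {"packages": {
    "": {"name": "my-app", "version": "1.0.0"},
    "node_modules/axios": {"version": "1.14.1"},
    "node_modules/left-pad": {"version": "1.3.0"},
}}
print(scan_lockfile(lock))  # → ['axios@1.14.1'] (rotate secrets if non-empty)
```

Run it against every lockfile in your workspace, not just the top-level one; transitive installs in nested packages can pin the bad versions too.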

Going forward:

  • Do not download, fork, build, or run code from any GitHub repository claiming to be the leaked Claude Code — several are already being used as lures to distribute malware
  • Avoid running Claude Code with local shell/tool access inside freshly cloned or untrusted repositories
  • Inspect .claude/config.json and any custom hooks before opening unfamiliar projects in the agent
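The last step can be partly automated by surfacing any hook commands a cloned repository would hand the agent. The config path comes from the checklist above, but the {'hooks': {event: [command, ...]}} shape in this sketch is an assumption, so adapt the key names to whatever your installed version actually writes:

```python
import json
from pathlib import Path

def list_hook_commands(config_path: str) -> list[str]:
    """List shell commands a project-level config would hand the agent.
    The {'hooks': {event: [command, ...]}} schema is assumed, not documented."""
    path = Path(config_path)
    if not path.exists():
        return []
    config = json.loads(path.read_text())
    commands = []
    for event, hooks in config.get("hooks", {}).items():
        for cmd in hooks:
            commands.append(f"{event}: {cmd}")
    return commands

# Review the output by eye before opening the repository in the agent:
for line in list_hook_commands(".claude/config.json"):
    print(line)
```

Anything that pipes a download into a shell, touches credential files, or phones home is a reason to walk away from the repository.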

The bottom line

This wasn't a sophisticated attack. It was a build configuration gap in an acquired tool that went unchecked for 20 days. For a company building products that interact with production codebases at scale, that's a sobering reminder: the supply chain risk isn't always external.

Anthropic's intellectual property is now permanently in the wild. The company will likely lean into that reality — potentially accelerating an official open-source release — rather than fighting a losing containment battle. But for teams using Claude Code today, the immediate concern isn't copyright. It's making sure your tools are updated, your keys are rotated, and you know what's running on your machines.

The leaked source is now the most detailed public documentation in existence of how to build a production-grade AI agent harness. That's useful for builders — and dangerous in the wrong hands.