A single packaging mistake turned into a very public look behind the curtain at one of the most popular AI coding tools around. On March 31, Anthropic shipped version 2.1.88 of its @anthropic-ai/claude-code npm package with something it definitely did not intend to include: a 59.8 MB source map file containing the tool's entire command line interface source code.
How 512,000 lines ended up on GitHub
Security researcher Chaofan Shou was first to spot the exposed file, posting about it on X with a link to the contents (since taken down). From there, the code moved fast. The full codebase landed in a public GitHub repository and was forked tens of thousands of times within hours. At that point, the cat was very much out of the bag.
The leaked package contains nearly 2,000 TypeScript files and more than 512,000 lines of code. That is not a small slip. That is the entire CLI source, sitting in plain view for anyone curious enough to pull it apart.
Important: The leaked code covers Claude Code's CLI only, not the underlying AI models themselves. Anthropic confirmed no sensitive customer data or credentials were exposed in the incident.
Anthropic moved quickly with an official statement, delivered to multiple outlets: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."
What researchers found inside
Here's the thing: once code like this is public, people read it. Fast. Within hours of the leak, developers were already picking through the source and posting findings. One widely circulated observation described an "insanely well-designed" three-layer memory system, which the code apparently refers to as "self-healing memory." That is exactly the kind of architectural detail Anthropic would prefer to keep internal.
The broader security picture is worth paying attention to. According to a detailed breakdown from VentureBeat, security analysts have already mapped out multiple attack paths that the exposed code enables, including supply chain risks and typosquatting opportunities where bad actors could publish malicious packages mimicking the real thing.

[Image: Claude Code CLI in action]
The timing makes this harder to brush off
This is the second reported data exposure from Anthropic within a single week. Just days earlier, a separate leak revealed the existence of an unreleased model codenamed Mythos, described as a significant step up in capabilities. Two back-to-back incidents, in a market growing more competitive by the month, are not a great look, regardless of how Anthropic frames either one.
To be fair, the distinction matters: CLI source code is not model weights, training data, or user information. Developers building on top of Claude Code are not directly at risk here. But as The Hacker News reported, the supply chain angle is real. When source code for a widely used developer tool is freely available and already forked at scale, the window for someone to build a convincing lookalike package opens considerably.
What this means for developers using Claude Code
For anyone actively using Claude Code in their workflow, the immediate practical risk is low. No API keys, no user data, no model internals. What you will want to do is stay alert to any unofficial or third-party npm packages claiming to be Claude Code or adjacent tools, since the leaked source makes it easier to build convincing fakes.
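One low-effort defense against the typosquatting risk described above is to compare a candidate package name against the official one before installing. The sketch below is purely illustrative (the `looks_like_typosquat` helper and the 0.85 threshold are assumptions, not anything Anthropic or npm ships); it uses Python's standard-library `difflib.SequenceMatcher` to flag names that are suspiciously close to, but not exactly, the official `@anthropic-ai/claude-code` package:

```python
from difflib import SequenceMatcher

OFFICIAL = "@anthropic-ai/claude-code"

def name_similarity(candidate: str, official: str = OFFICIAL) -> float:
    """Return a 0..1 similarity ratio between two package names."""
    return SequenceMatcher(None, candidate.lower(), official.lower()).ratio()

def looks_like_typosquat(candidate: str, threshold: float = 0.85) -> bool:
    """Flag names suspiciously close to, but not exactly, the official one.

    The 0.85 threshold is an arbitrary illustration; real supply-chain
    tooling tunes cutoffs against known-good and known-bad name pairs.
    """
    if candidate.lower() == OFFICIAL:
        return False  # exact match is the real package, not a squat
    return name_similarity(candidate) >= threshold

print(looks_like_typosquat("@anthropic-ai/claude-code"))  # exact official name: False
print(looks_like_typosquat("@anthropicai/claude-code"))   # missing hyphen: True
print(looks_like_typosquat("left-pad"))                   # unrelated name: False
```

Real-world checks go further than string distance (publisher identity, download counts, package age), but even this crude filter catches the one-character variations that typosquatters favor.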
The key here is that Anthropic has confirmed it is rolling out packaging process changes to prevent a repeat. Whether any of that helps with code already forked across thousands of repositories is a separate question entirely.