Anthropic accidentally included a debug file in a recent update to Claude Code. That file let people reconstruct the entire internal codebase: roughly 500,000 lines of code across nearly 2,000 files. It wasn’t a hack or breach—it was a packaging mistake. Anthropic cited “human error.” No customer data or AI model secrets were exposed. What leaked was the scaffolding around the AI, the layer that decides how Claude Code thinks, acts, and talks to you.
The reconstructed code hit GitHub and became one of the fastest-starred repos in the platform’s history before Anthropic started issuing takedowns. People found an always-on background agent mode codenamed “KAIROS,” a “dream” mode for continuous ideation, and Tamagotchi-style pet behavior baked into the tool. (Try it yourself: type /buddy and see what happens.) Ars Technica has a good breakdown of what the code reveals about where Anthropic is headed.
A developer in France named Zack mapped the entire codebase and created this microsite to illustrate what happens when you send a message to Claude Code. Fascinating.


