Claude Code Source Code
https://twitter.com/Fried_rice/status/2038894956459290963
https://news.ycombinator.com/item?id=47584540
On March 31, someone accidentally published a production build of Claude Code to npm together with its sourcemap file (~60 MB), making the entire source code publicly available. Some thought it was a brilliant April Fools' prank: the code even mentions a rollout window specifically for April 1–7. Whether it was a joke or a real mistake is still being debated.
What exactly leaked (based on thread discussions):
- Full Claude Code agent architecture (tool use, computer use, bash, file operations, etc.).
- Permission system and "Bypass Permissions Mode" — a detailed description of how guardrails work.
- Full Claude Code system prompt (including security rules and "cyber risk instructions").
- Telemetry logic — what exactly is sent to Datadog (model, session ID, subscription type, whether the user is an Anthropic employee, etc.).
- Internal infrastructure: WebSocket sessions, JWT for IDE integration, feature flags via GrowthBook, session-ingress, etc.
- Hidden/unreleased features (many posts with "hidden features" breakdowns).
- "Undercover Mode" subsystem, designed to prevent Claude from disclosing Anthropic's internal information (which evidently did not stop Anthropic itself from publishing a production build with sourcemap files).
Analysis by Alex Kim
https://alex000kim.com/posts/2026-03-31-claude-code-source-leak/
https://news.ycombinator.com/item?id=47586778
Anthropic deliberately injects fake tools to poison attempts to copy Claude's behavior. There is server-side text summarization with a cryptographic signature. A special mode (undercover.ts) forces the model to hide mentions of internal names (Capybara, Tengu, Slack channels, "Claude Code," etc.). Bash commands go through strict security checks (23 checks against injections, zero-width characters, etc.). There is also a prompt caching system with "sticky latches" and 14 invalidation vectors.
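To make the bash-security idea concrete, here is a minimal illustrative sketch of what a zero-width-character and injection check might look like. The function name, the specific Unicode ranges, and the pattern list are my own assumptions for demonstration; the leaked code reportedly runs 23 distinct checks, far more than shown here.

```typescript
// Unicode code points commonly abused for invisible-character injection
// (zero-width space/joiner/non-joiner, word joiner, BOM).
const ZERO_WIDTH = /[\u200B-\u200D\u2060\uFEFF]/;

// Command substitution and chaining patterns that could smuggle extra commands.
const SUSPICIOUS = /\$\(|`|&&|\|\||;|\n/;

// Hypothetical helper: returns false if the command looks unsafe to run.
function isCommandSafe(cmd: string): boolean {
  if (ZERO_WIDTH.test(cmd)) return false; // hidden characters in the command
  if (SUSPICIOUS.test(cmd)) return false; // chaining or command substitution
  return true;
}
```

A real harness would of course combine many such checks with an allowlist and a permission prompt rather than a single boolean gate.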
An autonomous agent called KAIROS is also mentioned, with a /dream skill, daily logs, GitHub webhooks, and updates every 5 minutes. It looks like the next big step after the current Claude Code.
The most meme-worthy find: userPromptKeywords.ts contains a large regex that catches phrases like wtf, ffs, omfg, shit, dumbass, fuck you, this sucks, damn it, i.e. signals that the user is angry. The model likely reacts differently in that case; the author assumes this is for experience improvement or escalation.
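The idea behind that detector can be sketched in a few lines. This is a hypothetical reconstruction, not the leaked code: the actual regex in userPromptKeywords.ts is reportedly much larger, and the function name here is invented.

```typescript
// Toy frustration detector: a case-insensitive regex over a short phrase list.
// The real list in the leak is far longer.
const FRUSTRATION = /\b(wtf|ffs|omfg|dumbass|fuck you|this sucks|damn it)\b/i;

function isUserFrustrated(prompt: string): boolean {
  return FRUSTRATION.test(prompt);
}
```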
The leak is dangerous not so much for the code itself as for revealing the roadmap and internal protection mechanisms.
Visualization
https://ccunpacked.dev/ and https://ccleaks.com/
https://news.ycombinator.com/item?id=47597085
Especially useful for developers who want to understand how Anthropic builds agentic systems (tool calling, multi-agent, planning loop, bash security, etc.).
https://www.youtube.com/watch?v=LA3l81oEzJQ
Key findings — hidden features:
- KAIROS: A constantly active background agent that works 24/7, monitors repositories, and fixes bugs on its own.
- ULTRAPLAN: Deep planning for up to 30 minutes in the cloud for complex tasks.
- BUDDY: A terminal-based Tamagotchi companion with 18 species and statistics.
- DREAM: An automatic self-cleaning and memory consolidation system.
Analysis by Joe Fabisevich
https://build.ms/2026/4/1/the-claude-code-leak/
https://news.ycombinator.com/item?id=47609294
An indie developer and author of Plinky writes not about the leak itself but about what it says about modern development. Anthropic immediately started sending DMCA notices to GitHub (even for forks of its own skills and examples). Clean-room reimplementations in Python and Rust appeared soon after.
The discussion jokes about "Claude leaking itself": the classic hype take that the model decided to open-source itself.
Analysis by Han HELOIR YAN, Ph.D.
https://medium.com/@han.heloir/everyone-analyzed-claude-codes-features-nobody-analyzed-its-architecture-1173470ab622
The article is more technical and measured: it focuses not on the meme features (Buddy, Undercover Mode, the frustration regex) but on the architecture of Claude Code as a full-fledged production-grade AI agent.
Anthropic's moat is not the model (LLM) itself but the harness: the wrapper, the system around the model. It is this harness that makes Claude Code feel significantly more powerful than competitors, even when the underlying model is not always the best.
#claudecode