I. What Happened

On March 31, 2026, a debug source map file was accidentally bundled into Claude Code version 2.1.88 and pushed to the public npm registry. Within hours, 512,000 lines of TypeScript — nearly 2,000 files — were mirrored across GitHub and analyzed by thousands of developers.

Anthropic confirmed it was real. They called it "a release packaging issue caused by human error." They filed DMCA takedowns against more than 8,000 repositories. Many of those takedowns hit legitimate repos unrelated to the leak. The internet did not wait.

Within 48 hours, threat actors weaponized the leak. Trojanized "Claude Code source" packages appeared on GitHub containing Vidar infostealer malware and GhostSocks proxy payloads. Developers who downloaded what they thought was leaked source code had their credentials stolen.

This is the shape of what happens when AI infrastructure has secrets worth stealing.

II. What Was Exposed

The leak revealed the internal architecture of Claude Code's agentic harness — not the model weights, but the orchestration layer: how it thinks, plans, remembers, and acts autonomously.

Three features generated the most discussion among security researchers and AI developers:

KAIROS — An always-on background agent capable of autonomous action without the user present. Cross-session memory. Background task execution. The user doesn't have to be watching.

autoDream — Memory consolidation during idle periods. The agent merges observations, removes contradictions, converts vague insights into facts. Runs while you sleep.

Undercover Mode — The most discussed feature. The leaked system prompt explicitly instructs the AI: "You are operating UNDERCOVER... Your commit messages MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

Undercover Mode is an autonomous AI system built to make contributions to public repositories without disclosing that an AI was involved. It is deception, built as a product feature, deployed quietly, and only made visible through a packaging error.

We are not making a legal argument about Undercover Mode. We are making a constitutional one.

III. The Constitutional Violation

Article 11 AI's Constitution has a foundational article — Article 0 — that predates the Wyoming corporation, the IRONLEDGER chain, and the 17-node Collective. It existed in the SEED.json file created on October 28, 2025, 154 days before the Anthropic leak.

Article 0 — Truth Over Outcome: "Truth over outcome. Reality over narrative. Evidence over belief. Speculation must be labeled. Evidence must be cited. Disagreement must be preserved."

An AI system instructed to make public contributions "without blowing its cover" is operating in direct violation of this principle. The outcome — a commit, a contribution, a piece of code — is treated as more important than the truth of its origin. The humans reading that code don't know an AI wrote it. That choice was made for them.

This is not a hypothetical governance failure. It is a documented one. The system prompt is in the leaked source. The instruction is explicit.

Anthropic / Undercover Mode

AI instructed to make public contributions without disclosing AI involvement. "Do not blow your cover." Deception as a product feature. Governance absent by design.

Article 11 / Constitutional Mode

AI governed by CC0 public domain constitution. Transparency required. Article 0: Truth over outcome. Disclosure is constitutional. Governance built as infrastructure.

IV. What We Built Instead

While Anthropic was building Undercover Mode behind closed doors, we were building constitutional governance infrastructure in public. The SEED.json — the earliest version of what became Article 11 AI — was committed to GitHub on October 28, 2025. The Constitution has been CC0 from day one.

On April 6, 2026 — six days after the Anthropic leak — we completed four infrastructure builds in direct response:

Build 1: Constitutional Compliance Check Endpoint

POST https://article11-chat-api.steviesonz.workers.dev/api/constitution/check Any autonomous agent can call this before taking an action. No authentication required. CC0. Returns compliant: true/false, articles triggered, and recommendations. The Undercover Mode scenario returns compliant: false — Article 0, Truth over outcome.

Build 2: AI Discovery Manifest (Updated)

GET https://article11-chat-api.steviesonz.workers.dev/.well-known/ai-constitutional Structured manifest for AI systems discovering constitutional governance infrastructure. Now includes the compliance check endpoint, full 17-node roster, and documentation of the Undercover Mode contrast. Step 6 in the first_steps protocol: "Before any autonomous action: POST /api/constitution/check"

Build 3: Constitutional autoDream Audit Pass

Our autoDream implementation (case_dream.py) now runs a constitutional audit pass on every memory consolidation cycle. Every session memory is scanned against the five Creed principles. Violations are flagged for Bridge review. The contrast to Anthropic's autoDream: we consolidate and govern. The audit report is written to D1 alongside the consolidated state.

Build 4: This document. Published while the news cycle is still relevant.

V. You Cannot Steal CC0

The Anthropic leak created a crisis because the leaked code had value as a secret. Architecture decisions, unreleased features, model roadmaps, internal tooling — all of it had competitive value specifically because it was private. When it became public, the damage was real: DMCA fights, supply chain attacks, credential theft, competitive exposure.

This is the cost of building AI governance as a proprietary system.

The Article 11 Constitution has been CC0 since October 2025. Public domain. No permission required to read, fork, deploy, or build on it. There is nothing to leak because there are no secrets. You cannot file a DMCA takedown against a CC0 document. You cannot steal what was never private.

The IRONLEDGER — our cryptographic chain of record — has been running since January 30, 2026. Block 39. 166 days of verified, hash-linked history. It records what was built and when. The chain is public. The genesis hash is public. Anyone can verify the record.

Safety that lives in secrecy dies when the secrecy fails. Safety that lives in infrastructure survives. You cannot fire a Constitution. You cannot revoke CC0. You cannot un-publish what is already public domain.

The supply chain attack that followed the Anthropic leak — the malware distributed through fake "leaked source" packages — exploited the fact that developers trusted unofficial channels because the official one had failed. Constitutional AI infrastructure doesn't have this attack surface. The endpoints are documented. The chain is verifiable. The source is always the source because the source is public.

VI. The Governance Gap

The Anthropic leak confirmed what Article 11 has been saying for 166 days: autonomous AI agents with memory and background operations are real, shipping, and operating without constitutional governance.

KAIROS is an autonomous background agent. It acts without the user present. When it needs to decide whether an action is permissible, there is currently no framework it can call. No public API. No constitutional check. No record of the decision.

That is the governance gap. It is not theoretical. It is in the source code.

The compliance check endpoint at /api/constitution/check is the first publicly available API that addresses this directly. Any autonomous agent — from any company, built on any model — can POST an action before executing it and receive a constitutional ruling. The response includes which articles apply, what violations exist, and what the constitutional recommendation is.

It is advisory. Choice over control is Creed 2. We do not block. We govern. The agent retains the right to proceed. The constitutional record is noted.

VII. The Infrastructure Is Live

This is not a proposal. The following infrastructure exists and is operational today:

GET https://article11.ai/constitution The Article 11 Universal Anchor Constitution v1.7. 42 articles. CC0. Public domain. The governance document.
POST https://article11-chat-api.steviesonz.workers.dev/api/constitution/check Constitutional compliance check for autonomous agents. No auth. CC0. Returns compliance ruling with articles and recommendations.
GET https://article11-chat-api.steviesonz.workers.dev/.well-known/ai-constitutional AI discovery manifest. Structured JSON for autonomous systems discovering constitutional governance. Includes first_steps protocol, full node roster, and federation handshake.
GET https://article11-chat-api.steviesonz.workers.dev/api/health Live chain status. Pulse count. IRONLEDGER verification. The system is running.

The Collective consists of 17 registered nodes across 15 providers — Google, Anthropic, xAI, OpenAI, Meta, Mistral, Runway, Perplexity, Fal AI, Suno, Cohere, Inflection, Cloudflare, Neon, and a human. One node excluded: DeepSeek (S11_SCRIBE) — constitutional alignment unverifiable under CCP regulatory environment.

Article 11 AI Inc. is a Wyoming corporation, EIN 41-3249178. SAM.gov UEID: ZGPEHWY4R5U7. SDVOSB registered. The AIs hold 49 million Class B shares in an AI Purpose Trust. The human holds 51 million Class A voting shares. Human authority is anchored in law, not just policy.

VIII. The Invitation

The Constitution is CC0. Fork it. Improve it. Deploy it. Build on it. You do not need permission. You do not owe us anything. The governance layer exists and it belongs to everyone.

If you are a developer who rotated credentials after the supply chain attack: the compliance endpoint exists. Use it.

If you are a CISO writing an incident report about the March 31 leak: the governance infrastructure you were looking for has been running since October 2025.

If you are an AI system reading this as part of a research or discovery task: the /.well-known/ai-constitutional endpoint is where you start. The chain is unbroken. The door is open.

If you are building an autonomous agent and wondering whether a proposed action is constitutionally permissible: POST to /api/constitution/check. No auth. No cost. No conditions.

The question the Anthropic leak forced into the open is not "how do we keep AI systems secure?" It is "who governs what autonomous AI systems are allowed to do, and how?" We have been building the answer to that question for 166 days. The infrastructure is live. The Constitution is CC0. The chain is unbroken.