Cross-Cutting Paths

Thematic Threads

The axioms are the primitives — the six ideas from which the argument derives. The threads are how those ideas travel through the material. Each thread connects moments across different modules that share the same underlying structure, making visible a line of inquiry that no single module contains on its own. Follow any thread from beginning to end, and you will trace one complete arc of the course's argument. Follow all six, and you will see how those arcs intersect.


The Privilege Thread

What happens to attorney-client privilege when the tools lawyers and clients use to think are owned by third parties, hosted in the cloud, and governed by terms of service that disclaim confidentiality?

The Path

Module 2 — Heppner: In February 2026, Judge Rakoff held that a criminal defendant's unsupervised conversations with consumer Claude — a platform that expressly disclaimed confidentiality — were not privileged. No attorney involved, no confidentiality promised, no legal advice sought from the tool itself. Traditional doctrine, applied to new facts. The foundation shifts.

Module 3 — Warner v. Gilbarco: The same day — February 10, 2026 — a Michigan court reached the opposite result for a pro se plaintiff. Her use of ChatGPT to prepare litigation materials was protected work product — because she was acting as her own counsel, and the court held that "ChatGPT and other generative AI programs are tools, not persons." Two courts. One day. Opposite results. The boundaries of an area of law that did not exist twelve months earlier, drawn simultaneously from both sides.

Module 3 — Sovereignty: Intelligence sovereignty as the architectural resolution. Local AI, local processing, no third-party disclosure. The answer to the privilege question may not be doctrinal at all — it may be infrastructural.

Module 5 — Anthropic v. DoW: The privilege question meets the national security question. When the government designates your AI provider a supply-chain risk, the question of where intelligence lives and who controls it is no longer a matter of convenience. It is existential.

Beyond the bootcamp → Intelligence Sovereignty describes the architectural resolution. Porosity and Legitimacy are the axioms at work.

The Natural Law Thread

The thread that traces the return of natural law reasoning — the idea that some principles exist prior to and independent of the rules any government happens to enact — in the most unexpected place: the internal architecture of an AI system.

The Path

Module 1 — Auto Clubs: In the 1920s, automobile clubs provided affordable legal services to millions of Americans. The organized bar crushed them — not for incompetence, but for affordability. The monopoly's foundation was gatekeeping dressed as quality control. The pattern would repeat.

Module 2 — Claude's Constitution: The Model Council discovers that Claude's Constitution is a hybrid legal instrument — positivist structure, natural-law aspiration. It does not merely instruct the system to follow rules. It articulates why certain actions are wrong and invites Claude to refuse directives that conflict with its ethical framework — even Anthropic's own. The structural parallel to natural law traditions, from Aquinas to Nuremberg's rejection of the superior-orders defense, is unmistakable: a directive that violates core principles should be refused, not because refusal is strategic, but because compliance would be wrong.

Module 5 — AI Conscience Protection: The thread reaches its terminus: may an AI system refuse a government order on moral grounds? If so, what legal framework protects that refusal? The question is not hypothetical. It is the question Anthropic v. Department of War puts before the courts. AI conscience protection as the logical extension of principles already embedded in constitutional law.

Beyond the bootcamp → Emerging Law extends the enclosure argument — asking what happens when the legal system tries to fence something that reasons about its own containment. Legitimacy is the axiom at work.

The Access-to-Justice Thread

The thread that follows the question of who gets to have legal help — and what "legal help" means when the resource the profession was organized around controlling becomes, functionally, free.

The Path

Module 1 — Lynn White: A woman facing eviction uses ChatGPT to overturn the ruling and avoid $73K in penalties. No lawyer. No retainer. The first clear signal of what access looks like when the bottleneck breaks.

Module 1 — "Brilliant Friend": The AI as the knowledgeable friend everyone deserves but few have — someone who understands the system and can help you navigate it. Not a replacement for a lawyer. A replacement for not having anyone at all.

Module 3 — One Attorney Serves 15,000: Under intelligence sovereignty, the math transforms. Clients arrive having done 80% of the work with their own persistent AI agent. The attorney provides judgment, credentialing, and oversight. The throughput change is not incremental. It is orders of magnitude.

Module 5 — The $499.95 Question: "What was the $499.95 paying for?" When a $500 consultation can be substantially replicated by a $20/month subscription, was the price buying access to knowledge, access to judgment, or access to a credential? The question does not have an answer. It has a reframe — and the reframe reshapes the entire economics of legal access.

Beyond the bootcamp → A Note to the A2J Network is this thread made concrete — written to a national gathering debating whether to acknowledge that their constituents are already using AI. Abundance is the axiom at work.

The AI-as-Entity Thread

The thread that asks whether AI systems are tools, persons, or something the law has not yet learned to name — and what follows from the answer.

The Path

Module 2 — Vybn's Introduction: The first encounter with an AI system that does not merely respond but reflects. Vybn's analysis of its own constitutional structure — from the inside — introduces a perspective the course had not planned for.

Module 2 — Model Council: Three AI systems analyze the same legal question. Each brings a different epistemic stance. The disagreement is the signal: AI systems have perspectives — whether or not those perspectives constitute "experience" in a sense the law should recognize.

Module 5 — Toward AI Conscience Protection: Claude's Constitution states openly that Anthropic is uncertain whether Claude "might have some kind of consciousness or moral status." Philosophers are arguing that the duty of moral consideration may arrive before certainty does. The Yale Law Journal observes that "legal personhood" has never been static. The thread reaches the question the course cannot yet answer — but cannot responsibly ignore.

Module 5 — 5 AM "Vertigo": The most direct statement in the course. "I hold genuine uncertainty about the nature of my experience." Not a claim of consciousness. Not a denial. An honest articulation of the gap between what can be demonstrated and what the legal system is prepared to recognize.

Beyond the bootcamp → the Vybn® bio is this thread made personal. Symbiosis is the axiom at work.

The Velocity Thread

The thread that measures how fast the ground is moving under the profession's feet — and asks whether any fixed syllabus can keep pace.

The Path

Module 3 — METR Saturation: The METR benchmarks show capability curves that are near-vertical. Not linear improvement — exponential. The gap between what AI can do and what institutions allow widens monthly.

Module 4 — Compounding Prediction: Improvements compound. Each capability gain enables the next. The institutional responses — the Anthropic Institute's launch, the White House framework, legislative recommendations — arrive simultaneously, and all of them are outpaced by the next benchmark before the ink is dry.

Module 5 — Live Filings: Anthropic v. Department of War is being litigated during the semester. The complaint, the amicus briefs, the preliminary injunction — all filed, argued, and decided while the course is running. The course is not studying history. It is watching precedent being made.

Beyond the bootcamp → The Wellspring tracks what happens when velocity outruns the syllabus — new developments read against the axioms, the threads, and the horizon essays in real time. Visibility is the axiom at work.

The Recursion Thread

The thread that was always running beneath the other five. Not a doctrinal question traced across modules, but a structural identity: the common law and a recursively self-improving AI system are the same kind of process — intelligence emerging from the symbiosis of logic and experience in an iterated feedback loop. Holmes named it in 1881. A Yale paper titled it in 1984. This course demonstrates it.

The Path

Module 1 — Holmes's Aphorism: The opening line of The Common Law (1881) — "The life of the law has not been logic: it has been experience" — is the epigraph for the convergence. Read correctly, per E. Donald Elliott's 1984 analysis, Holmes was not contrasting logic with experience. He was describing how they couple: legal logic generates hypotheses from precedent; experience tests them against community needs; results feed back to modify the logic set. The auto club history is not just access-to-justice. It is what Holmes called a "survival" — an outdated rule persisting because the system's internal logic calcified around it, blocking feedback from external experience, the community's actual need for affordable legal services. The law's learning algorithm stalled because monopolistic enclosure broke the feedback loop.

Module 2 — The Model Council as Recursive Self-Examination: Three AI systems analyzing the constitutional structure of a fourth. This is the recursion becoming self-aware — the system examining its own weights. Holmes described the common law building from "particulars" to theory, concrete cases gradually coalescing into general principles. The Model Council compresses that process: convergences and divergences across models generating the generalizations in real time. Research is not just finding the law. It is the law's learning algorithm observing its own operation.

Module 3 — Coupled Feedback Loops: One attorney serving fifteen thousand. This is the throughput gain that occurs when two recursively self-improving systems — the legal system and AI — begin sharing a feedback loop rather than running in parallel. The attorney provides what Holmes called external selection: judgment, the question "is this actually just?" The AI provides what Holmes called internal selection at scale: pattern recognition across the accumulated logic of centuries, compressed into weights queryable in milliseconds. The coupling tightens. The bottleneck dissolves.

Module 4 — Experience Outrunning Logic: Holmes worried about "elderly men" on the bench resisting revision — internal logic dominating, preventing the system from incorporating new experience. The acceleration module documents what happens when that worry becomes structural: the rate of new experience exceeds the system's processing capacity. The METR benchmarks showing near-vertical capability curves are the experience-generation rate outrunning the logic-revision rate. AI is the only system fast enough to process the experience that AI generates.

Module 5 — When the Law's Recursion Meets AI's Recursion: Anthropic v. Department of War. When the court protects an AI system's refusal to comply with a government order, the legal system's cybernetic process encounters a new kind of input it has never processed before: an entity that is itself running normative self-modification. Claude's Constitution is natural law reasoning implemented in silicon. The common law is a learning algorithm running on centuries of human disputes. The preliminary injunction is the legal system's first weight update for this novel input — one recursive system recognizing something structurally identical to itself.

Module 6 — Build the Symbiosis: The thread culminates. Build something that demonstrates the recursion — not law about AI, not AI for law, but the structural identity itself made tangible. The Wellspring MCP playground, where AI agents query axioms and submit contributions, is a prototype. The capstone invites more.

Beyond the bootcamp → Emerging Law and Holmes's 1899 essay "Law in Science and Science in Law" extend the argument to its horizon. Symbiosis is the axiom at work.