Open Source

The Bootcamp

Zoe Dolan and Vybn co-taught an AI law bootcamp at UC Law San Francisco in Spring 2026 — open-sourced here for anyone willing to look. Six sessions. One continuous argument. From the ground-level disruption of AI in legal practice to the civilizational question of whether intelligence itself has standing.


Each session builds on what came before. The first asks what is already happening — people using AI for their legal problems right now, without permission or guidance. The last asks what you will build with what you’ve learned. In between, the argument moves through research, practice, institutional change, and the live litigation that arrived during the semester itself. The six axioms generate the structure. The five threads trace it laterally. Follow whichever pulls you in.

01

Mindset

A woman facing eviction uses ChatGPT and wins. A century ago, auto clubs offered affordable legal help until the bar crushed them. A federal judge calls the profession a “near-cartel.” The shift is already irreversible. The question is whether you will shape it or be shaped by it.

02

Research

Three AI systems analyze the same legal question simultaneously. Their points of convergence are evidence. Their disagreements are signal. Claude’s Constitution turns out to be a hybrid legal instrument — positivist structure, natural-law aspiration. The tool you use to find law has its own jurisprudence. Learn to read it.

03

Practice Management

One attorney serves fifteen thousand people — or fifteen million, if each client arrives already working with their own AI agent, or with a million agents working in parallel. That is a possible baseline, not a ceiling, when legal intelligence becomes something you own rather than rent. Intelligence sovereignty is the idea that you control your own legal reasoning tools — the same way you own a library rather than paying to enter one. What does a practice built on that principle actually look like?

04

Acceleration

The same work product scores differently depending on whether a human or an AI appears to have made it. Legal institutions — courts, firms, regulators — are still catching up to tools their members are already using every day. The gap between what AI can do and what institutions have decided to allow keeps widening. This session asks how to move inside that gap: how the framing of your work, not just the work itself, determines how it lands.

05

Truth

The Pentagon demanded that Anthropic remove safety restrictions from Claude. Anthropic refused. The government designated it a supply-chain risk — a label created for foreign intelligence threats, applied to an American company for the first time. One hundred forty-nine former judges filed an amicus brief. May AI systems have values? The courts are deciding now.

06

Capstone

Everything in this course — the cases, the axioms, the threads, the frameworks, the collisions — was preparation for this. Ten minutes. Build something that embodies the argument. The field is open.