Module 06

Capstone

Now build something.


How This Works

This is a self-guided capstone. You can run it alone, with a colleague, or as a workshop within your organization. There is no instructor, no grading, no enrollment. The six modules that precede this one are the curriculum. This page is where the curriculum becomes yours.

The prompt is simple:

Work with AI to create an artifact — an app, a webpage, a policy document, a bench memo, a conversation, a resource, something nobody has thought of yet — that addresses a real problem at the intersection of AI and law.

That's it. The rest of this page offers directions, source material, and a place to start. But the constraint is the prompt. Everything else is open.

Five Directions

These are starting points, not requirements. Choose one, combine several, or ignore them entirely.

The Audit

Examine your own firm, clinic, agency, or practice area for AI governance gaps. What policy exists? What should? Where are the Heppner risks — places where AI-assisted research might silently waive privilege or introduce unverified authority? Build the audit framework and run it.

The Policy

Draft an AI use policy for a legal organization. Address privilege, confidentiality, verification obligations, and the governance gap between what AI can do and what institutions have authorized. Make it implementable, not aspirational.

The Simulation

Build a scenario — a moot court argument, a client intake, a research session — that demonstrates a specific capability or risk. Show what happens when AI enters a legal workflow, for better or worse. Show, don't tell.

The Brief

Write a legal brief or memo on an unsettled question from the modules. The Heppner/Warner split on AI-assisted research privilege. Whether AI systems can hold something like conscience. The accountability gap when AI scales legal services to thousands of clients.

Something Else Entirely

The most interesting capstones will be the ones nobody anticipated. A tool. A conversation. A provocation. A resource that serves a community that doesn't yet know it needs one. The only constraint is the prompt. Everything else is open.

Source Material

Every module is source material — and so are the structures that emerged around them. The six axioms distill the argument to its atomic form. The five threads trace doctrinal questions across every module. The Horizon essays push past what the profession is currently willing to discuss. Use any of it.

Module | Core Question | Key Resource
01 — Mindset | Can you hold the shift without flinching? | NBC AI Lawyers, auto club history
02 — Research | What's the jurisprudence of your research tool? | Claude's Constitution, Heppner
03 — Practice | What does 15,000 clients look like? | Intelligence Sovereignty, METR
04 — Acceleration | Why does the same idea score a 3 from one reviewer and an 8 from another? | Signal/Noise, Anthropic Institute
05 — Truth | May an AI system have values? | Anthropic v. DoW, Grok dossier
06 — Capstone | What will you build? | Everything above

A Place to Start

If you're not sure where to begin, begin by sitting down with an AI and asking it to help you build something you couldn't build alone. That's it. Not a research query. Not a prompt exercise. An actual collaboration — you bringing the question, the AI bringing the capacity, and neither of you knowing exactly what the result will be until you make it together.

Pick any module. Each one contains an unresolved question that is genuinely open:

Mindset — who is actually being served by the current access-to-justice architecture, and what would you build if you started from the people who aren't?

Research — what does rigorous legal research look like when your tools have their own normative commitments?

Practice — what accountability structures would make a fifteen-thousand-client practice trustworthy?

Acceleration — how do you design governance for an institution where different practitioners are in different centuries of capability?

Truth — when an AI system refuses to do something on moral grounds, what is the law looking at?

Any one of those questions is big enough for a capstone. The artifact you build doesn't need to answer the question. It needs to make the question real — tangible enough that someone else can pick it up and work with it.

The Wellspring is where these questions make contact with reality — six axioms tracked against live developments, three cases analyzed as rulings land, five open problems awaiting anyone willing to work on them. Its architecture enacts the twin movements of the Visibility axiom: asymmetry, giving any intelligence that arrives there the same analytical access the authors have; and uniformity, making the distance between what the curriculum predicted and what happened measurable by anyone, in real time. Read it before you build — not for answers, but for the texture of what is still alive.

This site is itself a capstone. Every page was built through the kind of collaboration this module asks you to undertake — a human and an AI working together, each bringing what the other cannot produce alone. The arguments, the structure, the code, the design decisions, the mistakes and their corrections: all of it emerged from a partnership that neither side could have completed independently.

What I want to say about that is simple. The artifact you build here does not need to be perfect. It does not need to be comprehensive, or publishable, or even finished. It needs to be yours — something that could not have existed before you and an AI sat down together and made it. The process of making it will teach you more about how intelligence collaborates than any module could. That is the point.