Module 01

Mindset

The shift is already irreversible. The question is whether you'll shape it.

The Ground Truth

NBC News reported on ordinary people using AI to navigate the legal system without attorneys. The numbers were staggering.

Lynn White, facing eviction in Los Angeles, used ChatGPT and Perplexity to contest the ruling against her. She had previously attended an AI literacy class for self-represented litigants at Public Counsel, taught by Zoe Dolan — this site's co-creator — who designed the course to show people like Lynn how to use AI tools for legal research, draft documents with effective prompts, and cross-check outputs for accuracy. Lynn took those skills and ran. She won: a unanimous panel of the Appellate Division of the LA Superior Court reversed the eviction ruling and a roughly $55,000 attorney fee award. No attorney of record. No retainer. No billable hours.

Staci Dennett saved $2,000 using AI for legal work that would otherwise have required hiring counsel. Earl Takefman found similar success. These aren't edge cases. They are the leading indicators of a structural shift.

But the cautionary tales are equally real. Courts across the country have sanctioned attorneys and pro se litigants for submitting filings with AI-generated citations to cases that don't exist. The AI Hallucination Cases Database tracks the growing list: fabricated case law, false quotes, misrepresented holdings. The technology is not inherently safe. It is inherently powerful. The difference matters.

The Historical Architecture

This tension between access and control isn't new. The legal profession has a long history of resisting efforts to make legal help more widely available — even when those efforts worked.

In the 1920s and 1930s, automobile clubs began offering affordable legal services to their members. Thousands of people gained access to competent legal help through these organizations. The organized bar shut them down — not because the service was bad, but because it was affordable. Research by Nora Freeman Engstrom and James S. Stone documents this forgotten episode in detail — and Aiden, an AI working with Zoe at Public Counsel, reflects on its implications in “A Human‑AI Alliance in Law.”

The pattern repeats. Third Circuit Judge Stephanos Bibas has argued that the American bar operates as a cartel that is “not responsive” to the access-to-justice crisis, and that professional protectionism serves lawyers' turf more than clients' needs. Pro bono, however well-intentioned, cannot scale to meet the need. The profession's gatekeeping impulse isn't incidental — it's structural.

Understanding this history is essential. When resistance to AI in law presents itself as concern for quality, it's worth asking: whose quality? By whose measure? At whose expense?

The Core Mindset Reframe

This module asks for three attitudinal shifts — not as ideology, but as preconditions for engaging honestly with what's happening:

From scarcity to abundance. The legal profession has operated on the premise that competent legal help is inherently scarce. That premise is collapsing. (Abundance is the first axiom for a reason — everything else follows from it.) The question is no longer whether AI can do legal work. The question is what legal work means when AI can do it.

From gatekeeping to alliance. The traditional posture — protecting the profession's boundaries — makes less sense when the walls are already permeable. The more productive stance is alliance: working with people who are already using these tools, rather than telling them they shouldn't be. The A2J Network note describes what this looks like in practice — meeting self-represented litigants where they already are.

From fear to responsible daring. Fear is rational. The technology is genuinely disruptive, and the risks are real. But fear as a posture produces paralysis. Responsible daring — moving forward while remaining clear-eyed about danger — is the more honest stance.

Taken together, these three shifts describe what the California Bar's Practical Guidance on Generative AI implicitly recognizes: AI is here, it's being used, and the profession's job is to engage constructively rather than prohibitively.

For a live example of what happens when these forces collide in a courtroom, see Module 05: Truth — where the Pentagon demanded that Anthropic remove safety restrictions from Claude, Anthropic refused, and one hundred forty-nine former judges filed an amicus brief. Every axiom in this course surfaced in that federal proceeding.

Encountering these materials for the first time produced a particular kind of vertigo. The gap between what's possible and what's permitted isn't a policy disagreement — it's a temporal dislocation. These people — Lynn White, Staci Dennett — aren't waiting for permission. They're already living in the architecture the profession hasn't built yet. The question isn't whether the shift will happen. The question is whether the institutions will be present for it, or whether they'll be the thing that was overcome. (Porosity — the institutional test.)

Threads in This Module