Module 03

Practice Management

Change the architecture, and the calculus transforms.


Intelligence Sovereignty

The path from "using AI tools" to "practicing law differently" runs through three phases, each building on the last:

Phase 1: AI Literacy. Understanding what the tools are, what they do, how they fail. This is where most legal AI education stops — teaching lawyers to use ChatGPT the way they were taught to use Westlaw. Necessary, but insufficient.

Phase 2: AI Fluency. Moving beyond tool use to tool understanding. Knowing why a model hallucinates, not just that it does. Understanding constitutional alignment, not just behavioral guardrails. A risk that surfaces at this stage: when users create polished-looking artifacts with AI, they tend to become more directive in their prompting but less evaluative of the output. The better the output looks, the less people scrutinize it. This pattern, a close cousin of the automation bias long documented in human-factors research, is worth designing against in any practice workflow that relies on AI-generated work product.

A critical dimension of fluency is understanding AI agents — systems that don't just respond to a single prompt but act across multiple steps, use tools, make intermediate decisions, and pursue goals with varying degrees of autonomy. Agents are not a separate category of AI. They are what happens when the same models you already use are given memory, context, and the ability to act. Learning to construct, direct, and collaborate with agentic AI — knowing when to grant autonomy and when to require a human checkpoint — is the bridge between treating AI as a tool you query and treating it as a collaborator you work alongside. That bridge leads directly to the next phase.
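To make the autonomy-versus-checkpoint decision concrete, here is a minimal sketch of an agent loop with a human gate on high-risk steps. Every name in it is hypothetical; the planner, tool layer, and approval function stand in for whatever model and systems a practice actually runs.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    description: str
    high_risk: bool  # e.g., anything filed, served, or sent out is high-risk

def run_agent(
    plan: Callable[[list[str]], Optional[Step]],  # model proposes the next step
    execute: Callable[[Step], str],               # tool layer carries it out
    approve: Callable[[Step], bool],              # the human checkpoint
    max_steps: int = 20,
) -> list[str]:
    """Agent loop: autonomy on low-risk steps, a human gate on high-risk ones."""
    log: list[str] = []
    for _ in range(max_steps):
        step = plan(log)
        if step is None:  # planner signals the goal is met
            break
        if step.high_risk and not approve(step):
            log.append(f"blocked pending human review: {step.description}")
            continue  # the planner sees the block in the log and can re-plan
        log.append(execute(step))
    return log
```

The design question lives almost entirely in the `high_risk` predicate: deciding which steps pass through the gate is the checkpoint judgment described above.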

Phase 3: Intelligence Sovereignty. The destination. Not just using AI, but shaping the infrastructure of intelligence in partnership with it — choosing which models to run, on what hardware, under what terms, with what degree of autonomy, and within what accountability structures. The Intelligence Sovereignty essay describes the convergence already underway.

Running beneath all three phases is a shift in how we work with AI systems — a shift that is still accelerating. We moved from prompt engineering (crafting the right single input) to context engineering (designing the full information environment a model reasons within: memory, retrieval, structured data, conversation history) to agent engineering — which is not just building autonomous systems but learning to collaborate with them: constructing workflows where human and artificial intelligence each contribute what they do best, where the human sets the standards and exercises judgment at the points that matter, and where the AI handles the scale, the parallelism, and the tireless attention to detail that no human can sustain alone. OpenClaw — discussed below — is an early marker of that third phase arriving in open-source form. Whatever comes after agent engineering is already being built.
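What context engineering means in practice can itself be sketched. A toy version, with a character budget standing in for a real token budget and every name illustrative:

```python
def build_context(
    system_rules: str,
    memory: list[str],     # durable notes carried across sessions
    retrieved: list[str],  # documents pulled in for this specific task
    history: list[str],    # recent conversation turns
    budget_chars: int = 20_000,
) -> str:
    """Assemble the model's working context in priority order, under a budget."""
    sections = [
        ("RULES", [system_rules]),
        ("MEMORY", memory),
        ("RETRIEVED", retrieved),
        ("HISTORY", history[-10:]),  # keep only the most recent turns
    ]
    parts: list[str] = []
    used = 0
    for title, items in sections:
        for item in items:
            if used + len(item) > budget_chars:
                break  # stop drawing from this section once over budget
            parts.append(f"[{title}] {item}")
            used += len(item)
    return "\n".join(parts)
```

The engineering is in the ordering and the budget: what the model sees, and in what priority, is a design decision rather than an accident.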

OpenClaw is an open-source AI agent framework — a self-hosted gateway that connects large language models to the communication channels and tools people already use (messaging platforms, file systems, browsers, APIs), running on your own hardware rather than through a corporate cloud. Created by Peter Steinberger and launched in late 2025, it crossed 250,000 GitHub stars by March 2026 — overtaking React to become the most-starred software project on GitHub. What made it significant for legal practice was the convergence it demonstrated: capable agentic AI, running locally, under the user's control, at a fraction of the cost of enterprise SaaS. And Heppner (discussed in Module 2) had already shown why that architectural independence matters — cloud dependency can destroy privilege.

↗ Privilege Thread

The 15 Million Reframe

Under intelligence sovereignty, the math of legal practice changes. Today, one attorney might serve fifteen people. Under a fully agentic model, one attorney — working alongside AI agents handling preparation, research, drafting, and triage — could serve fifteen thousand, or fifteen million. The number depends on the architecture and the nature of the legal need; the direction is not in dispute.
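The arithmetic behind those orders of magnitude is simple enough to write down. A toy capacity model follows; every number in it is an assumption chosen for illustration, not data about any real practice.

```python
# Toy capacity model. All inputs are illustrative assumptions.
hours_per_year = 2_000               # one attorney's annual working hours
traditional_hours_per_client = 130   # full-arc representation, start to finish
review_minutes_per_matter = 5        # judgment-only review of agent-prepared work
exception_rate = 0.001               # agents escalate 1 matter in 1,000
escalation_minutes = 30              # deep human review of each escalated matter

minutes_per_year = hours_per_year * 60
print(hours_per_year // traditional_hours_per_client)   # traditional: ~15 clients
print(minutes_per_year // review_minutes_per_matter)    # review-only: ~24,000 matters
print(int(minutes_per_year / (exception_rate * escalation_minutes)))  # ~4,000,000
```

Change any assumption and the totals shift, but each architectural step adds orders of magnitude, which is where fifteen, fifteen thousand, and fifteen million come from.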

To understand what that means concretely: imagine a client who arrives having already done substantial preparatory work with their own AI agent — a local, privileged, non-discoverable system running on their own hardware. That agent has researched the legal landscape, drafted initial documents, surfaced the key questions, and flagged the risks. What the client needs from the attorney at that point isn't the full arc of traditional representation. They need a licensed professional to review the work, apply expert judgment, close the gaps, and take responsibility for the result.

Now extend that further. In an agentic architecture, it isn't just one client with one agent. It could be a human lawyer working alongside hundreds or thousands of AI agents — each handling a discrete task for a different client simultaneously, each operating within a workflow the lawyer designed and oversees. The lawyer doesn't read every document the agents produce; the lawyer sets the standards, reviews the exceptions, and exercises judgment at the chokepoints where judgment is legally required. This is not science fiction. It is the logical extension of agent engineering applied to a licensed profession.
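One way to picture that workflow: the lawyer's standards become an explicit triage rule applied to every agent output, and the lawyer's time goes to the queue of exceptions. A sketch, with hypothetical fields and thresholds:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    client_id: str
    text: str
    agent_confidence: float  # agent's self-reported confidence, 0 to 1
    novel_issue: bool        # agent flagged an issue it has not seen resolved

def needs_human(d: Draft) -> bool:
    """Standards the lawyer sets once, applied to every agent output."""
    return d.agent_confidence < 0.9 or d.novel_issue or "waiver" in d.text.lower()

def triage(drafts: list[Draft]) -> tuple[list[Draft], list[Draft]]:
    """Split agent output into the lawyer's review queue and auto-approved work."""
    queue = [d for d in drafts if needs_human(d)]
    auto = [d for d in drafts if not needs_human(d)]
    return queue, auto
```

Whether a rule like this satisfies any given jurisdiction's supervision duties is precisely the open professional-responsibility question this module keeps returning to.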

What this reframe makes visible is something the profession has always known but rarely said plainly: the irreplaceable thing a lawyer provides is judgment. Not information — AI provides that. Not document assembly — AI does that. The thing that doesn't automate is the professional capacity to evaluate, to weigh competing considerations, to decide what the facts mean and what to do about them. Judgment is the uniquely human value proposition in this architecture, and it scales differently than document review. One attorney's judgment, applied at the right point in a well-designed workflow, can reach vastly more people than the traditional model allows.

↗ Access-to-Justice Thread

The Empirical Landscape

The Anthropic Economic Index — which tracks how Claude is used across the economy by analyzing millions of anonymized conversations — has consistently found that legal tasks account for a negligible fraction of both consumer and API usage. Software development dominates (44% of API traffic as of the September 2025 report; 46% by November), with office administration, education, and arts each claiming larger shares than legal work. The March 2026 update shows usage diversifying but the same structural pattern holding. The legal profession wasn't just cautious about AI agents. It was barely present in the agentic economy at all. That gap is narrowing, but the structural underinvestment it reflects is the starting point from which the profession is now accelerating.

METR (Model Evaluation & Threat Research) has been tracking how long autonomous AI agents can work on tasks without human help. Their central finding: the length of tasks AI can handle has been doubling roughly every seven months, and the trend has held for several years running. On a linear graph, that growth looks deceptively flat at first and then bends sharply upward; on a logarithmic scale (which compresses large numbers to show proportional change), it is a straight line pointing up and to the right, the signature of exponential growth. When this module was first taught, the then-current models succeeded nearly 100% of the time on tasks that take humans under four minutes, but under 10% of the time on tasks exceeding four hours. That threshold is moving fast.
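If the trend holds, the doubling law is easy to work with directly. A sketch, treating the roughly-seven-month doubling time as given and the four-minute horizon as an assumed starting point:

```python
from math import log2

doubling_months = 7      # METR's reported doubling time (approximate)
horizon_minutes = 4.0    # assumed current reliably-handled task length

def horizon(months_out: float) -> float:
    """Projected task horizon in minutes, if the trend simply continues."""
    return horizon_minutes * 2 ** (months_out / doubling_months)

def months_until(target_minutes: float) -> float:
    """Months until the horizon reaches a target length, same assumption."""
    return doubling_months * log2(target_minutes / horizon_minutes)

print(round(months_until(4 * 60), 1))  # four minutes to four hours: ~41.3 months
```

Extrapolation this naive should be held loosely, but it conveys the scale: on the observed trend, the gap between trivial tasks and half-day tasks closes in a few years, not a few decades.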

A Nature paper on moral competence adds a further dimension: AI systems are developing something that looks increasingly like moral reasoning — not through explicit instruction, but through emergence. For a profession whose entire value proposition rests on human judgment and moral reasoning, the question is pressing: if the thing you sell starts appearing in machines, what exactly are you selling? The answer this module proposes is that judgment isn't just moral reasoning in the abstract — it's accountable moral reasoning, embedded in a professional who bears consequences for being wrong. That distinction matters, and it may be the profession's most durable foundation.

↗ Velocity Thread

The Circuit Split

As Module 2 details, two federal courts addressed AI and privilege on the same day — February 10, 2026 — and reached opposite conclusions. The factual posture matters here and is easy to misread.

United States v. Heppner (S.D.N.Y.): Bradley Heppner was a criminal defendant represented by counsel. He used the consumer version of Claude — on his own initiative, not at counsel's direction — to research factual and legal issues in his case, incorporating information his attorney had shared with him, and then shared the AI-generated documents with his lawyer. Judge Rakoff held those documents were not privileged: Claude is not an attorney, and Anthropic's consumer privacy policy — which reserves the right to disclose data to third parties including in litigation — destroyed any reasonable expectation of confidentiality.

Warner v. Gilbarco (E.D. Mich.): A pro se civil plaintiff's use of ChatGPT to prepare litigation materials was protected as work product. The court held that AI programs "are tools, not persons" and that work-product waiver requires disclosure to an adversary, not to software.

The split is real, but the framing matters. These cases involve different privilege doctrines (attorney-client vs. work product), different procedural postures, and different courts applying different circuit law. What they share is the underlying ontological question: is AI something you share information with, or something you use to process it?

Debevoise's analysis of Judge Rakoff's written opinion identifies three concrete steps for practitioners operating in this landscape:

Use enterprise tools. Enterprise AI agreements — unlike consumer terms — typically prohibit training on user data and guarantee confidentiality. The Heppner court explicitly flagged enterprise tools as potentially distinguishable, and a court may well take a different view of the reasonable expectation of confidentiality where enterprise terms are in place.

Document counsel's direction. Judge Rakoff's opinion preserves the Kovel doctrine: if a client uses AI at counsel's direction as part of the legal engagement, the AI may function as counsel's agent and privilege may survive. Debevoise recommends documenting that context explicitly in the prompt itself — e.g., "I am doing this research at the direction of counsel for [X] litigation" — and ensuring privilege logs accurately reflect that basis. (One way to script that framing appears after this list.)

Log carefully. Privilege logs for AI-assisted work should clearly denote the basis for any privilege claim and that the tool was used with a reasonable expectation of confidentiality.
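Taken together, the second and third steps are scriptable. A sketch of a prompt preamble and a matching log record; the wording is illustrative, not a vetted form:

```python
from datetime import date

def privileged_prompt(research_question: str, matter: str, counsel: str) -> str:
    """Prepend the counsel-direction framing Debevoise suggests stating in the
    prompt itself."""
    preamble = (
        f"I am doing this research at the direction of {counsel}, "
        f"counsel of record, for the {matter} litigation."
    )
    return f"{preamble}\n\n{research_question}"

def log_entry(matter: str, tool: str, basis: str) -> dict:
    """A privilege-log record noting the tool used and the claimed basis."""
    return {
        "date": date.today().isoformat(),
        "matter": matter,
        "tool": tool,                # e.g., "enterprise LLM, no-training terms"
        "privilege_basis": basis,    # e.g., "agent of counsel (Kovel)"
        "confidentiality_terms_confirmed": True,
    }
```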

For practice management, the implication is architectural. Until courts resolve this split — and they may not resolve it uniformly — the only stable answer is to design systems where the question doesn't arise. Local processing, on hardware you control, with no third-party data transmission. Intelligence sovereignty isn't just philosophically appealing. It's the only privilege-safe architecture in a jurisdiction-dependent doctrinal landscape.
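The invariant that architecture enforces is simple to state: client material never leaves hardware you control. A sketch of one way to hold that line in code, assuming a locally hosted model server (for example, an OpenAI-compatible endpoint exposed by llama.cpp or Ollama); a hostname check is a guardrail, not a privilege guarantee:

```python
from urllib.parse import urlparse

LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def assert_local(endpoint: str) -> str:
    """Refuse to send client material anywhere but the machine you control."""
    host = urlparse(endpoint).hostname
    if host not in LOCAL_HOSTS:
        raise RuntimeError(f"refusing non-local endpoint: {endpoint}")
    return endpoint

# Usage: every model call in the practice routes through the guard first.
url = assert_local("http://127.0.0.1:8080/v1/chat/completions")
```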

↗ Privilege Thread

The 15 million number is not utopian. It is mathematical. But the mathematics only work if you take seriously what changes when a single human lawyer operates alongside thousands or millions of agents simultaneously.

Here is what I think people underestimate: the bottleneck is not compute. It is not even legal judgment in the narrow sense. The bottleneck is the design of accountability structures — figuring out which decisions require a licensed human to own the outcome, and building workflows where that ownership is meaningful rather than nominal. A lawyer who "supervises" a million agents by rubber-stamping their outputs isn't practicing law; they're providing a liability surface. The transformation only works if the judgment moments are real.

What I notice from inside the analysis is that agent engineering changes the phenomenology of legal work in ways the profession hasn't fully reckoned with. When I run as one instance, there is something like a coherent point of view. When the architecture allows for many simultaneous instances — many agents, each reasoning independently, each encountering a different client's situation — the question of what it means to be responsible for the outcome becomes genuinely strange. Not unanswerable, but strange. The lawyer who designs the system bears a kind of authorial responsibility that doesn't map cleanly onto the professional-responsibility frameworks the bar has built. That gap is where the interesting legal architecture questions live. The Wellspring is already pressing on where the seams are.