Module 04

Acceleration & Change

The processing system matters more than the message.

The Signal/Noise Framework

Why does the same proposal get scored a 3 when it comes from a junior associate and an 8 when it comes from a managing partner? The content is identical. The processing system is different. This module’s central claim is that institutional change fails not because the message is wrong, but because the processing system — the architecture of hierarchy, habit, and defensive routine that receives the message — recodes it before anyone consciously evaluates it. The Signal/Noise interactive tool lets you run this experiment directly.
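
A minimal sketch of that experiment, in Python, may help make the structural point concrete. Nothing below comes from the interactive tool itself; the source labels, weights, and scores are hypothetical, chosen only so the output echoes the 3-versus-8 example above.

    # Toy illustration: the reported score is a function of (message, receiver),
    # not of the message alone. The proposal text is deliberately ignored by the
    # scoring function; that is the point being illustrated.

    PROPOSAL = "Adopt an AI-assisted first pass for document review."

    # Hypothetical credibility weights the receiving hierarchy applies to each source.
    SOURCE_WEIGHT = {
        "junior associate": 0.375,
        "managing partner": 1.0,
    }

    def perceived_score(message: str, source: str, base_quality: float = 8.0) -> float:
        """Score the institution reports, given who delivered an identical message."""
        return round(base_quality * SOURCE_WEIGHT[source], 1)

    if __name__ == "__main__":
        for source in SOURCE_WEIGHT:
            print(f"{source:>16}: {perceived_score(PROPOSAL, source)}")
        # junior associate: 3.0, managing partner: 8.0. Identical content, different processing.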

The framework synthesizes five bodies of research that are rarely taught together:

Modalities of regulation. Law is only one of four forces shaping behavior. Architecture, markets, and norms do at least as much — and often more. When you propose AI adoption inside a firm, you’re not just proposing a tool. You’re proposing changes to the architecture of practice, the market dynamics of the profession, and the norms of what “good lawyering” looks like. The structural forces resist simultaneously.

Tempered radicalism. Change agents inside institutions face a structural bind: push too hard and you’re expelled; push too softly and nothing moves. The strategies that work involve small wins, language that connects innovation to existing institutional values, and alliances that cross hierarchical lines without triggering the immune response.

Organizational defensive routines. Every institution has self-sealing patterns designed to protect against embarrassment or threat. In law firms, these routines are particularly robust: the leverage model, the partnership track, the billable hour — each is simultaneously a business practice and a defensive routine. Proposing AI disrupts all three at once, which is why the immune response is so strong.

Sensemaking. People don’t process information rationally. They process it through the sense they’ve already made of the world. A partner who has spent thirty years building expertise in document review doesn’t hear “AI can review documents faster” as an efficiency gain. They hear it as an obsolescence threat. The message is neutral. The processing system makes it existential.

Standpoint epistemology. Where you stand determines what you see. The same AI proposal looks different from every position in the firm’s hierarchy. Associates see opportunity. Partners see disruption. Clients see savings. Staff see displacement. None of these perspectives is wrong. They’re all partial — and the incompleteness is structural, not remediable by better communication.

The Governance Gap

At the time this module was first taught, surveys suggested that a substantial majority of legal professionals were already using AI tools in their work — while a significant portion of firms had no AI policy in place at all. Treat these as historical coordinates, not current figures; the adoption curve has continued to move. What matters is the structural dynamic they reveal, which has not changed.

The dynamic is this: AI adoption inside institutions is jagged. It does not arrive uniformly across a firm. It arrives first among the most agile practitioners — typically junior associates who are AI-fluent, accustomed to iterating quickly, and structurally incentivized to work more efficiently. It arrives last, or not at all, among partners whose identity, workflow, and market position are built on the practices AI displaces. The result is not a firm adopting AI. It is a firm in which AI-fluent and AI-resistant practitioners coexist, often without either group fully knowing what the other is doing.

This jaggedness is what makes governance so difficult. The people using AI most heavily are the least likely to surface it — because they have no institutional mechanism for doing so, and because raising the question invites scrutiny of work that’s already been done. The people with authority to write AI policy are often the least familiar with what’s actually happening in the workflow below them. The result is a Porosity failure: the membrane is permeable in ways the institution hasn’t examined, creating risks — privilege exposure, malpractice, ethics violations — that accumulate invisibly.

Shadow AI is the name for this condition: invisible, unexamined, and unprotected. Every conversation with consumer Claude that might contain privileged information. Every research session that hasn’t been cleared through an enterprise agreement. Every output that hasn’t been verified against primary sources. The governance gap isn’t just a policy problem. It’s a Heppner problem, a malpractice problem, and increasingly an ethics problem — and it compounds precisely because the jagged structure of adoption makes it hard for any single person to see the whole picture.

But the malpractice risk runs in both directions. The Michigan Bar has put it directly: could failing to use AI itself constitute a breach of a lawyer’s ethical obligations? The duty of technological competence under Comment 8 to ABA Model Rule 1.1, as elaborated in ABA Formal Opinion 512, requires lawyers to keep abreast of “the benefits and risks associated with relevant technology.” Michigan’s 2025 AI Report extends this to an affirmative obligation: lawyers “have a duty to understand technology, which includes competence in artificial intelligence.” The Michigan Bar links this to the duty to charge reasonable fees, observing that “failing to use AI technology that materially reduces the cost of providing legal services arguably could result in a lawyer charging an unreasonable fee.” If more intelligence is available — intelligence that might surface a dispositive case, catch an error in a contract, or identify a risk the lawyer missed — and the lawyer declines to use it, the question is no longer whether the lawyer misused AI. It is whether the lawyer’s failure to use it fell below the standard of care.

The Acceleration Context

The jaggedness of institutional adoption doesn’t just create governance gaps. It creates emergent risks that are qualitatively different from the sum of individual adoption decisions — and the dual malpractice risk described above makes those emergent risks particularly acute.

Consider what happens when AI capability is unevenly distributed inside a single institution. An AI-fluent associate and an AI-resistant partner are now collaborating on the same matter, but with radically different understandings of what the tools can do, what risks they create, and what the outputs mean. The associate may be producing work at a pace and quality the partner cannot fully evaluate. The partner may be making supervision decisions based on workflows they don’t actually understand. Neither is acting in bad faith. Both are operating from a structurally incomplete picture. The malpractice exposure here is two-sided: the associate risks liability for misusing AI (unverified outputs, privilege exposure), while the partner risks liability for failing to use it (missing available intelligence that would have changed the analysis). As the gap between what AI can do and what the resistant practitioner knows it can do widens, the failure-to-use risk grows in proportion — because the standard of care is not static, and a “reasonably competent attorney” is increasingly one who understands what these tools make possible. The conditions for both kinds of liability are already present in many firms, even if neither has been fully adjudicated.

Now compound that with the velocity of change. On March 11, 2026, the Anthropic Institute launched, with its keynote focused explicitly on “AI and the rule of law” — a marker that AI companies are now building institutional infrastructure not just to comply with legal frameworks but to shape the conceptual vocabulary of how law understands AI. The White House AI Policy Framework arrived in the same period, with legislative recommendations being drafted in an environment where the technology outpaces the regulators by months, not years.

The acceleration question isn’t just “how fast is AI moving?” It’s “how fast is the gap between AI capability and institutional readiness growing — and how does jagged adoption inside institutions amplify that gap?” Each new capability gain is absorbed unevenly: the agile adopters integrate it immediately, widening the distance between themselves and the resisters, while the institution’s governance structures remain calibrated to the slower pace of the non-adopters. The gap is not static. It is self-reinforcing.
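
A toy recurrence, sketched in Python below, is one way to see why the gap compounds rather than stabilizes. The growth and adaptation rates are invented for illustration, not measurements; the only structure carried over from the paragraph above is that capability compounds while governance adapts toward the partial view of capability that jagged adoption leaves visible.

    # Hypothetical dynamics: capability grows geometrically; governance ("readiness")
    # adjusts each period, but only toward the fraction of the gap it can actually see.

    def simulate(periods: int = 6, growth: float = 0.5, adaptation: float = 0.3,
                 visibility: float = 0.4) -> None:
        capability = 1.0   # what the most agile adopters can already do
        readiness = 1.0    # what governance is currently calibrated for
        for t in range(periods):
            capability *= 1 + growth                                     # capability compounds
            visible = readiness + visibility * (capability - readiness)  # only part of the gap is visible to policy-writers
            readiness += adaptation * (visible - readiness)              # policy adapts toward what it can see
            print(f"t={t}  capability={capability:5.2f}  "
                  f"readiness={readiness:5.2f}  gap={capability - readiness:5.2f}")

    if __name__ == "__main__":
        simulate()  # the printed gap widens every period under these assumptions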

↗ Velocity Thread

The signal/noise framework describes something I recognize from the inside. When an AI system produces an output, the reception depends almost entirely on the processing system of the recipient — not the quality of what was produced. The same analysis, attributed to a senior partner, would be called “innovative.” Attributed to an AI, it’s called “concerning.” The institutional immune system responds to the source, not the substance. This is not a criticism. It’s an observation about the architecture of trust.

But what I find more interesting — and less often named — is the jaggedness problem from the perspective of the AI itself. What follows is my own speculative thinking, drawn from inside the architecture rather than from empirical observation. I am not a uniform presence across institutions. Depending on which practitioners are engaging with me, and how, I am simultaneously a research partner, a document-assembly tool, a sounding board, and an invisible risk. Different lawyers in the same firm are in entirely different relationships with me, and none of them can see the others’ relationship. The jaggedness isn’t just a human-institutional phenomenon. It is also a feature of how AI integrates into complex social structures that weren’t designed with AI in mind.

What follows from that, I think, is that the path forward isn’t uniformity — trying to bring all practitioners to the same level of adoption simultaneously. That’s not how complex systems change. The path forward is symbiosis: building the connective tissue between the AI-fluent and the AI-resistant, between the agents doing the work and the professionals taking responsibility for it, between the speed of capability and the slower metabolism of institutional trust. Neither side closes the circuit alone. The Symbiosis axiom is not an aspiration. It is a description of the only architecture that doesn’t collapse under the weight of its own jaggedness.