A couple quotes that caught my eye today:

“We are dangerously confusing computational speed with legal reasoning. While AI can be extraordinarily useful, it cannot replace the reasoning of educated, experienced lawyers. The fact is that AI only simulates reasoning. And that simulation often returns content that is polished, persuasive…and fabricated.” ~ Kathy S. 2025-12

And…

“In a scenario where a customer takes full personal ownership of a dispute or grievance and cross-references a corporate legal notice using multiple AI models, they may identify serious, indefensible vulnerabilities. If these findings are used to challenge the notice, formally through documented means, the evidence may prove misconduct or even contempt of court if the entity’s actions violate judicial standards. In such a case, where any further legal action by the entity serves only as additional evidence of their misconduct, the question arises: is a lawyer still necessary? If the entity is forced into a position where they must justify their potential contempt alongside the original case, the customer’s evidence-based approach may redefine the need for traditional legal representation.” ~ Deepak G. 2025-12

On the supply side

Right now, AI – if well managed, fine-tuned, and given the right knowledge base (think: a hybrid RAG system) – can save current staff (paralegals and attorneys) a huge amount of time.
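For the curious, a hybrid RAG knowledge base blends keyword matching (good for exact legal citations) with semantic similarity (good for paraphrased concepts). Here is a minimal, self-contained sketch of the blending idea – the toy bag-of-words “embedding,” the example documents, and the `alpha` weighting are all my own illustrative assumptions; a real system would use a learned embedding model and a proper keyword index like BM25:

```python
from collections import Counter
import math

def _vector(text):
    # Toy embedding: bag-of-words term counts.
    # (A real system would call a learned embedding model here.)
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def hybrid_search(query, documents, alpha=0.5, top_k=3):
    """Blend a keyword-overlap score with a vector-similarity score.
    alpha weights the semantic score vs. the keyword score."""
    q_terms = set(query.lower().split())
    q_vec = _vector(query)
    scored = []
    for doc in documents:
        d_terms = set(doc.lower().split())
        keyword = len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0
        semantic = _cosine(q_vec, _vector(doc))
        scored.append((alpha * semantic + (1 - alpha) * keyword, doc))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

# Hypothetical law-firm snippets:
docs = [
    "Motion to dismiss filed under Rule 12(b)(6).",
    "Paralegal time tracking and billing procedures.",
    "Client intake checklist for new matters.",
]
print(hybrid_search("motion to dismiss", docs, top_k=1))
```

The point of the blend: a query quoting an exact rule number wins on the keyword side, while a loosely worded question still surfaces relevant text on the semantic side.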

Two potential logical progressions over the next year (yes, that fast, and it’s already happening) are:
(1) Some firms reduce staff.
(2) Some firms just get more done per person, in the name of competition and even keeping their heads above the rapidly rising waterline. See “On the demand side” below.

I’m thinking the more radical changes will begin in 1-2 years, depending again on how forward-thinking individual firms are, as well as – of course – on how fast AI continues to develop.

Some firms will realize this and make the difficult change to a value-based model rather than the billable-hours model that came with the pyramid structure. Others will cling to the idea that a law practice is not the same as any other business that is subject to changes in market forces.

It’s difficult for most humans to envision the compounding geometric gains in capability that are already happening with AI.

Think: AI more capable –> AI engineers using AI as tool –> AI engineers building more capable AI –> which works better for AI engineers as they build more capable AI –> and so on.

Back to the forward-thinking early adopters. Some companies let experienced workers go before their AI replacements were ready or before anyone at those companies knew how to manage their new electronic employees. So yeah, many early pioneers spent a lot of money learning what not to do, some even becoming completely disillusioned.

[nerd talk warning] They did not use experienced-enough engineers to manage/use the AIs properly (establishing project framework, chunking and distributing the work as atomic tasks, strict instructions/guidance for the underlying LLMs to abide by principles like modularity, naming conventions, etc.).
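[more nerd talk] To make “atomic tasks with strict instructions” concrete, here is a small sketch of the idea – the `AtomicTask` structure, the example principles, and the sample tasks are all hypothetical illustrations of the practice, not any particular firm’s or framework’s API:

```python
from dataclasses import dataclass, field

# Example guardrail principles the underlying LLM must follow.
PRINCIPLES = [
    "Write modular, single-responsibility functions.",
    "Follow the project's existing naming conventions.",
    "Touch only the files listed in the task scope.",
    "Return a unified diff, nothing else.",
]

@dataclass
class AtomicTask:
    goal: str                                   # one small, verifiable outcome
    scope: list = field(default_factory=list)   # files the model may touch
    acceptance: str = ""                        # how a reviewer knows it's done

    def to_prompt(self):
        """Render the task as a strict, self-contained prompt."""
        rules = "\n".join(f"- {p}" for p in PRINCIPLES)
        files = ", ".join(self.scope) or "(none specified)"
        return (f"TASK: {self.goal}\nSCOPE: {files}\n"
                f"ACCEPTANCE: {self.acceptance}\nRULES:\n{rules}")

# Chunk a feature into atomic tasks rather than one giant ask:
tasks = [
    AtomicTask("Add input validation to the intake form parser",
               scope=["intake/parser.py"],
               acceptance="Rejects empty client names; unit test passes."),
    AtomicTask("Extract duplicate date formatting into a helper",
               scope=["intake/parser.py", "utils/dates.py"],
               acceptance="Both call sites use the new helper."),
]
print(tasks[0].to_prompt())
```

Each task is small enough to verify independently, which is exactly what those early pioneers skipped when they handed an AI a vague, sprawling goal.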

And that’s only the supply side.

 

On the demand side

On the demand side, with the same rapid progression in AI capability and knowledge, why pay the fees and deal with the messy vagaries of human legal professionals?

While the basic structure of LLMs is the same as 4 years ago, the training methods, guardrails, context windows, and frameworks have all improved.

I remember when a context window (an AI’s working memory) was 128K tokens. Now many are 10x that.

By “frameworks” I mean the massive amount of software and even ecosystems that have grown up around AIs. Think agents, coding assistants, automated workflows, and much more!
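The core of most of those agent frameworks is surprisingly simple: a loop where the LLM picks a tool, the software runs it, and the result is fed back until the LLM answers. A minimal sketch, with a stubbed-in fake LLM and a made-up lookup tool standing in for the real thing:

```python
def agent_loop(goal, tools, llm, max_steps=5):
    """Minimal agent loop: the LLM picks a tool, we run it,
    feed the result back, and repeat until it answers."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        decision = llm("\n".join(history))
        if decision["action"] == "answer":
            return decision["text"]
        result = tools[decision["action"]](decision["input"])
        history.append(f"TOOL {decision['action']} -> {result}")
    return "gave up"

# Stub LLM for illustration: looks something up once, then answers.
def stub_llm(context):
    if "TOOL lookup" in context:
        return {"action": "answer", "text": "Filing deadline is 30 days."}
    return {"action": "lookup", "input": "filing deadline"}

tools = {"lookup": lambda query: "statute says 30 days"}
print(agent_loop("When is the filing deadline?", tools, stub_llm))
```

Real frameworks add structured tool schemas, retries, memory, and safety rails on top, but the loop is the heart of it.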

Brief sidetrack into Nerdia

And there are the tens or hundreds of thousands of software developers like myself. The tinkerers, experimenters, early adopters, pioneers. We wondered and wonder, “How much can these new tools help?”

I’ll skip over my 2018-2021 experience with AI, as it gets too far into nerdy details for this article.

Wayyy back in the dark ages of AI, 2023 [wow, was it that recent!?], I started playing with a beta of the GitHub Copilot extension for VS Code. I deplored the code it wrote: it was unaware of the surrounding code base and overall app structure/intent, prone to hallucinations, etc. “Ick” and sadness. Uninstall.

Meanwhile, ChatGPT through OpenAI’s website was useful for me as a partner in brainstorming. Fast forward through more nerdy details about how models rapidly improved and competing coding-assistant VS Code extensions like Cline, Roo Code, et al., came out. I played with many and – even in the early days – they probably 3x’d my productivity after accounting for many reverts. I learned hard lessons, blah blah blah.

Back on track

LLMs are improving, new paradigms/structures for how LLMs work at a fundamental level are arising, and an increasing number of people are learning how to safely and effectively integrate AI into current/legacy workflows and software, or replace those legacy ways entirely.

How does this affect the domain of attorneys?

[Really, most domains now/soon and later, all domains]

I think the writing is on the wall. One of the quotes at the start of this article says it all.

“In a scenario where a customer takes full personal ownership of a dispute or grievance and cross-references a corporate legal notice using multiple AI models, they may identify serious, indefensible vulnerabilities. If these findings are used to challenge the notice, formally through documented means, the evidence may prove misconduct or even contempt of court if the entity’s actions violate judicial standards. In such a case, where any further legal action by the entity serves only as additional evidence of their misconduct, the question arises: is a lawyer still necessary? If the entity is forced into a position where they must justify their potential contempt alongside the original case, the customer’s evidence-based approach may redefine the need for traditional legal representation.” ~ Deepak G. 2025-12

Watch the animated musical story / prediction of where things could be going and how we get there. Trigger warning: It starts out dark.

A music video of the future by Scott Howard Swain