A Valley Viewpoint Narrative
Brave new world, folks.
England’s legal system—older than the Magna Carta itself, shaped by more than a thousand years of precedent, ritual, and human judgment—has taken a cautious step into the age of artificial intelligence.
Last month, the Courts and Tribunals Judiciary formally acknowledged what institutions across the democratic world are quietly confronting: AI is here, and it is already knocking on the courtroom door. Judges in England and Wales have now been given permission to use artificial intelligence as a limited drafting aid when producing written judgments.
But the embrace is narrow—and intentionally so.
The guidance draws a hard line between assistance and authority. AI may help with grammar, structure, and clarity, but it is expressly barred from legal research, factual analysis, or substantive reasoning. The reason is simple and unsettling: AI systems can fabricate case law, invent facts, reinforce hidden bias, and present confident-sounding falsehoods that are indistinguishable from truth unless carefully checked.
In other words, AI may help format the judgment—but it may not think it.
That caution was underscored by Sir Geoffrey Vos, Master of the Rolls and the second-most-senior judge in England and Wales:
“Judges do not need to shun the careful use of AI, but they must ensure that they protect confidence and take full personal responsibility for everything they produce.”
That phrase—protect confidence—is the quiet center of gravity here.
Courts do not survive on efficiency alone. They survive on trust. The authority of a ruling rests on the belief that a human being weighed the facts, interpreted the law, and exercised judgment shaped by experience, conscience, and accountability. Once the public begins to wonder whether a machine had a hand on the scales, the legitimacy of the system itself starts to erode.
America Is Already There—Without the Guardrails
Across the Atlantic, American courts are confronting the same reality—only without a single rulebook.
In the United States, there is no national judiciary issuing unified guidance. Instead, judges are learning in real time, often the hard way. Lawyers have been sanctioned for filing AI-generated briefs that cited entirely fictitious cases, attributed opinions to judges who never wrote them, and relied on legal authority that simply did not exist.
The reaction has been swift but fragmented. Some federal judges now require attorneys to certify that no AI-generated legal research was used without human verification. Others demand disclosure whenever AI tools play a role in drafting. State bar ethics committees have begun issuing warnings that reliance on unverified AI output may violate professional responsibility rules.
In the United States federal courts, one principle is becoming unavoidable: AI can assist, but it cannot be accountable. No algorithm signs an order. No chatbot answers on appeal. No machine bears ethical responsibility when justice goes wrong.
And yet the pressure is real. Dockets are crowded. Clerks are stretched thin. Judges are human. The temptation to use AI quietly—for summaries, boilerplate, or routine language—is obvious.
That is what makes this moment so fragile.
England has chosen caution and clarity. America is inching toward the same conclusion through missteps, sanctions, and public embarrassment. Different systems. Same anxiety.
Because once justice begins to sound automated—even if it isn’t—the rule of law itself starts to feel provisional.
The wigs and robes may be fading symbols.
But the principle beneath them remains unchanged:
Justice must be rendered by humans.
Owned by humans.
And trusted by humans.
Anything less, and “Can justice be drafted by a machine?” stops being a headline and becomes a warning.