Courts — not regulators, not the ICO, not the EU — handed out $145,000 in AI-related sanctions in the first three months of 2026 alone. These aren't warnings. They aren't investigations. They are financial penalties, already paid, by real professionals who trusted AI outputs without verifying them.
The conversation about AI risk has been dominated by future tense for two years. The EU AI Act will fine you. The ICO could investigate. The Colorado Act becomes enforceable in June. All of that is real — but it obscures something more immediate: courts are sanctioning AI misuse right now, with no notice period and no grace period for early adopters.
What actually happened in Q1 2026
The pattern is consistent across every case: a professional submitted AI-generated content — legal citations, case references, quotations — without checking whether it was real. It wasn't. Courts don't treat that as a software failure. They treat it as professional negligence.
Oregon Court of Appeals — $109,700 in aggregate penalties
Oregon moved fastest and most systematically. Following an initial case in December 2025, the court established a published fee schedule: $500 per false citation, $1,000 per fabricated quotation. That tariff has now been applied repeatedly:
- Gabriel A. Watson (December 2025) — $2,000 for two fake citations and one fabricated quotation.
- William Ghiorso (March 2026) — $10,000 fine for 15 false citations and nine fabricated quotations.
- Couvrette v. Wisnovsky — $15,500+ for 15 AI-generated fake citations and eight false quotations across three separate briefs.
A fee schedule is significant. It means this is no longer discretionary. If you submit a fake citation, you pay $500. Per citation. Automatically. Oregon has industrialised enforcement.
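To make the tariff concrete, here is the arithmetic as a minimal Python sketch. The rates come from the published schedule above; the function name and the framing of the result as a floor are ours, for illustration only.

```python
# Oregon Court of Appeals published tariff, per the fee schedule above.
FALSE_CITATION = 500      # dollars per false citation
FABRICATED_QUOTE = 1_000  # dollars per fabricated quotation

def minimum_sanction(citations: int, quotations: int) -> int:
    """Floor exposure under the per-item tariff. Courts can add costs on top."""
    return citations * FALSE_CITATION + quotations * FABRICATED_QUOTE

# Couvrette v. Wisnovsky: 15 fake citations, 8 false quotations
print(minimum_sanction(15, 8))  # 15500, consistent with the $15,500+ reported
```

The "+" in the Couvrette figure is worth noting: the per-item tariff sets the floor, and courts can stack fees and costs on top of it, as the Sixth Circuit case below shows.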
U.S. Court of Appeals, Sixth Circuit — $30,000 per attorney
In Whiting v. City of Athens (March 2026), the Sixth Circuit fined attorneys Van R. Irion and Russ Egli $30,000 each for submitting appellate briefs containing over two dozen incorrect, misrepresented, or nonexistent citations. The court also ordered additional attorney fees and double costs — meaning the total financial exposure for each attorney substantially exceeded the headline fine.
The finding that should concern every UK professional
Buried in the broader reporting on these cases is a finding from the American Bar Association that maps almost exactly onto UK professional obligations.
ABA Formal Opinion 512 establishes that attorneys using AI must meet standards for competence, confidentiality, and candour toward tribunals, with supervisory obligations comparable to those for overseeing a paralegal's work. The confidentiality requirement is the one with the widest implications: commercial AI platforms' data handling practices conflict directly with professional confidentiality obligations.
In plain terms: if you're a solicitor, accountant, or mortgage broker using ChatGPT for client work — drafting letters, summarising documents, preparing advice — and that platform processes client data through systems that log, retain, or use that data for training, you may already be in breach of your professional confidentiality obligations. Not in theory. Right now.
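One practical mitigation is to strip obvious client identifiers before anything leaves your systems. The sketch below is illustrative only: the patterns, placeholder labels, and redact function are hypothetical, cover a fraction of what real matter data contains, and are no substitute for a proper data protection review.

```python
import re

# Hypothetical, minimal redaction pass: swap obvious client identifiers for
# labelled placeholders before any text is sent to a third-party AI platform.
# Real matter data needs far broader coverage (names, addresses, case
# numbers) plus human review. This only illustrates the principle.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
    "UK_PHONE": re.compile(r"\b0\d{2,4}[ -]?\d{3,4}[ -]?\d{3,4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with its label, e.g. jane@x.com becomes [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Email jane.doe@clientco.co.uk or call 020 7946 0958."))
# Output: "Email [EMAIL] or call [UK_PHONE]."
```

Redaction alone does not make a consumer AI platform safe for client work, but it narrows what can leak through logging, retention, or training pipelines.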
The SRA's confidentiality requirements are not materially different. UK professional bodies haven't published ChatGPT-specific guidance yet, but the absence of guidance won't be a defence once it arrives.
The judicial AI paradox
A Northwestern University study published in March 2026 found that 61.6% of federal judges use AI tools themselves — primarily for legal research and document review. The same tasks courts are now sanctioning attorneys for performing without adequate oversight.
That's not hypocrisy. It's the difference between institutional and individual accountability. Courts using AI internally have IT governance, oversight frameworks, and verification protocols. Individual attorneys submitting AI output without verification have none of those things. That gap — between having an AI tool and having a governance framework around it — is where the fines live.
The upstream liability question
The most significant case to watch is Nippon Life v. OpenAI, filed in March 2026 in the Northern District of Illinois. The allegation: that ChatGPT constitutes unauthorised practice of law. If that case succeeds (and it is early days), it would extend potential liability from the individual professional up the chain to the AI tool's creators.
That matters for businesses using AI tools in professional contexts because it signals a direction of travel: regulators and courts are actively mapping responsibility across the entire chain, not just to the person who pressed send.
What this means for UK businesses right now
These are US cases. UK courts don't operate a $500-per-citation fee schedule. But the professional obligations are structurally the same, and the direction of regulatory travel is identical. The ICO has been publishing AI accountability guidance throughout 2025 and 2026. The SRA is watching. ICAEW is watching.
More immediately: if you're in a regulated profession and you're using AI tools with client data, your professional indemnity insurer is also watching. The question of whether an AI-related incident is covered under your existing policy is one most businesses haven't asked yet.
Three things are worth doing right now:
- Audit what AI tools your team is actually using. Not what you've approved — what's actually in use. Shadow AI in a professional services context is a professional indemnity exposure, not just an IT headache.
- Check your professional indemnity policy wording. Ask your insurer specifically whether AI-generated output that turns out to be incorrect is covered, and whether using unauthorised AI tools affects your cover.
- Build a verification obligation into any AI workflow. The US courts aren't punishing AI use. They're punishing unverified AI use. The distinction matters. A documented process (AI drafts, human verifies, human signs off) is a meaningful defence; the absence of one is not. A minimal sketch of that sign-off gate follows this list.
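Here is what that documented process can look like in code. All names and structure below are ours, purely illustrative; the point is the blocked-submission gate and the audit trail, not the specific implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDraft:
    """An AI-generated draft plus the audit trail around it."""
    content: str
    tool: str                        # which AI tool produced the draft
    verified_by: str | None = None   # named human who checked every citation
    signed_off_by: str | None = None
    log: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        self.log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

    def verify(self, reviewer: str) -> None:
        # The step the sanctioned attorneys skipped: a named person confirms
        # every citation and quotation against primary sources.
        self.verified_by = reviewer
        self.record(f"verified by {reviewer}")

    def submit(self, signer: str) -> None:
        # The gate: nothing goes out the door without verification on record.
        if self.verified_by is None:
            raise RuntimeError("Blocked: no human verification on record.")
        self.signed_off_by = signer
        self.record(f"signed off by {signer}")

draft = AIDraft(content="Skeleton argument, first draft.", tool="generic-llm")
draft.verify(reviewer="J. Smith")
draft.submit(signer="A. Partner")  # raises RuntimeError if verify() is skipped
```

If a court, regulator, or insurer later asks who verified the output, the record exists. That record is the difference the Q1 2026 cases turned on.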
The gap that costs professionals money
Every case in Q1 2026 shares the same root cause: a professional used an AI tool in a high-stakes context with no governance framework around it. No verification step. No oversight protocol. No documentation of the process. Just AI output, submitted as if it were professionally produced work.
That gap — between having access to an AI tool and having a framework for using it responsibly — is exactly what the enforcement wave is targeting. And it's exactly what a managed AI compliance framework closes.
The professionals paying $30,000 fines in Q1 2026 weren't acting in bad faith. They were using the tools available to them. They just didn't build the verification layer that would have caught the errors before submission. That layer doesn't have to be expensive or complicated. It just has to exist.
Find out where your business stands
We offer a free initial conversation to assess your current AI exposure — including Shadow AI, professional confidentiality obligations, and insurer requirements. No jargon. No hard sell. Just clarity on what you actually need.