In February 2026, a US federal judge ruled that a man’s conversations with an AI chatbot were not protected by legal privilege. The FBI had seized those conversations. The court found them fair game. The man had used the AI to explore his legal exposure before formally engaging a lawyer, the kind of thing thousands of professionals do every day with ChatGPT, Claude, Copilot, or Gemini. The judge’s reasoning was clear and, for anyone running a professional services firm, deeply relevant. Those conversations had “no obligation of confidence.” Neither do yours.
This is not a warning about a future risk. It is a description of the current legal position of every conversation your staff are having with public AI tools right now. If they are discussing client matters, strategy, personnel decisions, or anything else of substance with a public AI platform, that information has left your control — and a court has confirmed it has no duty of confidence attached to it.
What happened in the case
The defendant, Bradley Heppner, was facing a fraud investigation. Before formally instructing attorneys, he used an AI chatbot to explore his legal position — asking it questions about his situation, testing arguments, and thinking through his exposure. He did not consider these conversations to be disclosable.
The FBI disagreed. Agents seized the records of those conversations. Heppner argued they were protected on three grounds: attorney-client privilege, a reasonable expectation of confidentiality, or work-product protection. Judge Jed Rakoff, sitting in the Southern District of New York, rejected all three arguments. The conversations were not protected. They could be used as evidence.
The three reasons the judge gave
Judge Rakoff’s reasoning is worth understanding in detail, because each element of it applies directly to the way AI tools are used in professional services firms today.
1. No attorney-client relationship
Attorney-client privilege requires an attorney. An AI is not an attorney. It cannot enter into a privileged relationship. No matter how sophisticated the AI’s legal reasoning, no matter how carefully the questions are phrased, the conversation does not acquire the protection that a conversation with a qualified solicitor or barrister would have. This is straightforward — but it matters because many professionals are using AI tools as a first-pass legal or compliance resource before formally engaging counsel. That use case carries no privilege whatsoever.
2. No reasonable expectation of confidentiality
Every major AI platform’s terms of service include provisions that permit data collection, analysis, and in some circumstances disclosure to third parties. Judge Rakoff found that no reasonable expectation of confidentiality could exist where the user had voluntarily shared information with a system operating under those terms. This is the part of the ruling that has the widest implications. It is not specific to legal situations. It applies to any conversation where a user inputs client information, commercial strategy, or sensitive operational data into a public AI platform. The expectation of privacy that professionals attach to that information is not legally recognised.
3. No work-product protection
Work-product doctrine protects materials prepared in anticipation of litigation, under the direction of counsel. Heppner was using the AI independently, without attorney direction. The court found there was no work-product shield. The practical implication for professional services firms is that using an AI tool to draft litigation strategy, assess commercial risk, or prepare responses to regulatory enquiries — without that work being directed by qualified legal counsel — leaves the output unprotected.
In the judge’s own words, Heppner had “disclosed it to a third party, in effect, AI, which had no obligation of confidence.” That phrase, “no obligation of confidence”, describes the legal status of every public AI platform used in your business today.
What your staff are doing right now
Across the professional services sector — solicitors, accountants, IFAs, HR consultancies, recruitment firms, marketing agencies — AI tool usage has grown rapidly and mostly without governance. Staff are using publicly accessible AI platforms to do real work: drafting client advice, analysing financial data, summarising employment matters, reviewing contracts, writing proposals. The productivity benefit is real. The risk is equally real.
The problem is not that staff are using AI. The problem is that in most firms, nobody has defined what they can and cannot put into it. A junior accountant drafting a client memo with client financial details pasted into ChatGPT. A solicitor using an AI to draft a letter of advice that references privileged client instructions. An HR consultant entering employee disciplinary details into a public AI to help structure a report. All of this is happening, and almost none of it is governed.
Following the Heppner ruling, major law firms in the US issued immediate client advisories. The guidance was consistent: avoid public AI platforms for any matter with legal significance; use only private, closed deployments with documented confidentiality protections; obtain explicit attorney direction before using any AI system in a legal context; and treat everything shared with a public chatbot as potentially disclosable.
That guidance applies equally to every professional services firm — not just law firms, and not just in the US. A UK solicitor’s professional conduct obligations, an accountant’s client confidentiality duties, or a recruiter’s data protection obligations under UK GDPR do not transfer to the AI tool. The firm’s duty of confidence to its clients is not matched by any corresponding obligation on the part of the AI platform.
What you need in place
The response to this ruling is not to ban AI in your firm. That ship has sailed, and banning tools your staff are already using is neither practical nor enforceable. The response is to govern usage properly — before a client complaint, a regulatory enquiry, or a disclosed conversation forces the issue.
- An AI Acceptable Use Policy. A written, signed policy that defines which AI platforms are approved for use, which categories of information cannot be entered into any AI tool, and what the consequences of a breach are. This is the single most important document your firm needs right now. Without it, there is no baseline, no training standard, and no defence if something goes wrong.
- A clear classification of what is and is not shareable. Client names, financial data, legal instructions, personnel information, commercially sensitive strategy: these should be explicitly listed as prohibited inputs into public AI systems. Staff need to know where the line is, not infer it. (A minimal sketch of what enforcing such a classification might look like follows this list.)
- Approved tools and approved use cases. Where AI use is appropriate, name the tools, name the use cases, and specify what data can be used as input. Where private or enterprise AI deployments are appropriate for sensitive work, identify and procure them — the cost is trivial against the risk.
- Training and acknowledgement. A policy that has not been communicated and acknowledged by staff offers limited protection. Every person in the firm who uses AI tools needs to understand the confidentiality gap. This is not an IT issue — it is a professional conduct issue.
- For regulated sectors — document your position. Solicitors, accountants, IFAs, and others operating under professional regulatory frameworks need to be able to demonstrate that they have assessed the risk and put governance in place. “We hadn’t thought about it” is not a position a regulator will accept in 2026.
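To make the prohibited-inputs classification concrete, here is a minimal sketch of the kind of pre-submission check a firm might build into an internal tool or intranet form before text ever reaches a public AI platform. Everything in it is illustrative: the category names, the regex patterns, and the flag_prohibited_inputs helper are hypothetical placeholders rather than a real DLP product or a complete ruleset, and a real deployment would implement the firm’s own classification scheme.

```python
import re

# Illustrative sketch only: category names and patterns are hypothetical
# placeholders. A real policy defines its own categories, and no keyword
# filter is a substitute for a signed policy and staff training.
PROHIBITED_PATTERNS = {
    "client or matter reference": re.compile(
        r"\b(client|matter)\s*(no\.?|number|ref)\s*[:#]?\s*\w+", re.IGNORECASE
    ),
    # Rough approximation of a UK National Insurance number format.
    "national insurance number": re.compile(
        r"\b[A-CEGHJ-PR-TW-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.IGNORECASE
    ),
    "uk sort code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def flag_prohibited_inputs(text: str) -> list[str]:
    """Return the prohibited categories that appear to be present in text."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    draft = "Summarise the dispute on matter ref 48812 for j.smith@example.com"
    hits = flag_prohibited_inputs(draft)
    if hits:
        print("Blocked before submission. Prohibited categories:", ", ".join(hits))
    else:
        print("No prohibited categories detected.")
```

The point of the sketch is the shape of the control, not the patterns themselves: the Acceptable Use Policy defines the prohibited categories, and tooling like this merely enforces a subset of them at the point of input.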
Get an AI Acceptable Use Policy in place this week
We produce AI Acceptable Use Policies for professional services firms — solicitors, accountants, HR consultancies, recruiters, and agencies. Fixed fee, delivered in five working days. If you need broader AI compliance governance, our full AI Compliance Framework covers your tool inventory, risk assessment, staff training requirements, and regulatory position across UK, EU, US, and Canadian law.
About the author: Scott Neve is the founder of Ops Intel, a Newcastle-based AI compliance and automation consultancy. He works with solicitors, accountants, HR consultancies, and professional services firms across the UK, EU, US, and Canada on AI governance and compliance frameworks.