AI Compliance · Legal Professionals 18 April 2026

AI Compliance for UK Solicitors 2026:
SRA Obligations, Client Data & EU AI Act

The Solicitors Regulation Authority has not published a separate AI policy — and that is precisely the problem. It means the existing rules on competence, confidentiality, and supervision already apply to every AI tool your firm uses, right now, without any grace period. Courts are already issuing sanctions for AI-generated errors in submitted documents. The EU AI Act classifies certain legal AI as high-risk from August 2026. And UK GDPR creates obligations around client data that most firms are not meeting when they feed documents into third-party AI tools. This guide covers what is actually required of UK solicitors using AI in 2026.

This applies to your firm if:
  • You use any AI tool for document drafting, contract review, legal research, or client-facing communications
  • You upload client documents, correspondence, or case notes into AI platforms (including general tools like ChatGPT, Copilot, or Gemini)
  • You use AI to screen candidates, manage employees, or process HR data
  • You serve EU-based clients — the EU AI Act applies to you regardless of where your firm is based
See our AI compliance packages for professional services →

The SRA has not created new rules — it has applied the existing ones

A common misconception among solicitors is that AI compliance is something the SRA is still working out. It is not. The SRA published its position clearly: existing professional obligations — competence, confidentiality, supervision, and candour — apply to the use of AI in legal practice. There is no separate AI rulebook. The existing rulebook already covers it.

This matters because it means compliance is not optional or aspirational. It is already being enforced through the existing disciplinary framework. A firm that submits AI-generated work without adequate review, or that feeds privileged client data into an unsecured AI tool, is already in breach — not potentially in breach.

The SRA Competence Statement and AI

The SRA's Competence Statement requires solicitors to maintain the skills, knowledge, and attributes relevant to their role. The SRA has confirmed this includes understanding how the AI tools used in legal workflows operate and where their output can fail.

In practice, this creates three requirements that most firms are not fully meeting:

  • Understanding output limitations: Solicitors must understand that AI tools can hallucinate — generating plausible but incorrect case citations, statutory provisions, or legal arguments. Using output without verification is not competent practice.
  • Supervision of AI-assisted work: The SRA Code of Conduct requires firms to have effective systems for supervising work. AI-generated documents are not self-supervising. Firms need documented review processes, not an informal assumption that the output is correct.
  • Client disclosure: Clients have a legitimate interest in knowing if AI was used in a material way in their matter. The SRA has not mandated disclosure in all cases, but the duty of candour and the client care obligations both point in the same direction — transparency is the safer position.

Not sure where your firm stands on AI governance?

Our AI Compliance Foundation package gives you a documented AI policy, data processing review, and staff guidance — built specifically for professional services firms. Fixed price, delivered in five to seven working days.

See the Compliance Packages →

Client confidentiality and the LLM problem

This is the most operationally urgent issue for most law firms using AI in 2026, and it is one that many partners have not fully considered.

When a solicitor uploads client documents into a general-purpose AI tool — ChatGPT, Microsoft Copilot, Google Gemini, or any similar platform — the question is not merely whether the tool is helpful. The question is whether the processing of that data complies with:

  • The solicitor's duty of confidentiality to the client
  • UK GDPR obligations on lawful basis, purpose limitation, and data processor agreements
  • Legal professional privilege, where relevant
  • The SRA's obligation to protect client information

Under UK GDPR, if your firm uses a third-party AI tool to process personal data about clients or opposing parties, that tool is a data processor. You are required to have a Data Processing Agreement (DPA) in place with the provider. You must confirm where data is stored, how long it is retained, and whether it is used to train the AI model. Many standard consumer and SMB AI tool subscriptions do not provide the protections needed for legal-matter data.

The higher-risk scenario is privilege. If client communications or legally privileged documents are uploaded to a third-party AI platform without adequate contractual protections, there is a credible argument that privilege could be waived — not because of anything the client did, but because of the firm's processing decisions. This is not a theoretical risk. It is already being litigated in US jurisdictions and will reach UK courts.

Court sanctions: the precedent already being set

Courts in the United States, Canada, and the United Kingdom have already issued sanctions — financial penalties and professional censure — against legal professionals who submitted AI-generated filings containing fabricated case citations. In Q1 2026 alone, courts issued over $145,000 in AI-related penalties to legal professionals.

The pattern in every sanctioned case is the same: a legal professional used an AI tool, did not verify the output against primary sources, and submitted work that cited cases that did not exist. Courts have been unambiguous that the responsibility lies with the practitioner, not the tool. "The AI hallucinated" is not a mitigation — it is an admission that the practitioner failed to exercise the supervision their competence obligations required.

For UK solicitors, the relevant standard is the same as for any other supervised work. If a trainee submitted a research memo citing non-existent cases, you would catch it in review. AI output requires the same scrutiny — and firms need documented processes to make sure that scrutiny is actually happening, not just assumed.

The EU AI Act and UK law firms

The EU AI Act's high-risk AI obligations come into full force on 2 August 2026. The Act applies to any organisation whose AI systems are used in the EU — regardless of where that organisation is based. UK law firms with EU clients, EU-registered counterparties, or EU-based employees are in scope.

Specific legal AI applications that may fall under high-risk classification include:

  • AI systems used in employment decisions within the firm (screening, performance assessment, promotion)
  • AI used in access to justice contexts — tools that influence whether or how legal services are delivered to individuals
  • AI used in document review processes where outputs influence significant legal outcomes for individuals

High-risk classification under the EU AI Act requires conformity assessment, registration in the EU AI database, post-market monitoring, and human oversight mechanisms. Violations carry penalties of up to €15 million or 3% of global turnover.

Even where specific legal AI tools do not meet the threshold for high-risk classification, the Act's transparency obligations still apply to any AI that interacts directly with individuals — including AI-assisted client communications and chatbots.

UK GDPR obligations that legal AI creates

Article 22 of UK GDPR gives individuals the right not to be subject to decisions made solely by automated processing where those decisions produce legal or similarly significant effects. For law firms, this is directly relevant to any automated matter assessment, client risk scoring, or automated document routing that influences case strategy or client access to services.

Beyond Article 22, the ICO's AI Auditing Framework covers six areas that apply to any organisation processing personal data using AI: governance, transparency, data quality, accuracy, security, and human oversight. A law firm that cannot demonstrate adequate controls in each of these areas is exposed to ICO investigation and enforcement action under UK GDPR — penalties up to £17.5 million or 4% of global annual turnover.

A Data Protection Impact Assessment (DPIA) is required before deploying any high-risk AI processing of personal data. Most law firms using AI in matter management, client onboarding, or document processing will need DPIAs they do not currently have.

What a compliant legal AI framework looks like

The good news is that compliance is not a multi-year project for most small and medium law firms. It is a documented framework that covers the four areas regulators and courts are actually looking at:

  1. AI Inventory: A documented list of every AI tool used in the firm, what data it processes, where that data is stored, and what contractual protections are in place with each provider.
  2. Review and Supervision Policy: A written policy specifying how AI-generated output is reviewed before use in client matters, filings, or advice. This includes who is responsible for verification, what sources must be checked, and how errors are escalated.
  3. Data Processing Agreements: A confirmed DPA with every AI tool that processes personal data, with written confirmation of data residency, retention limits, and model training exclusions.
  4. Client Transparency: A defined position on when and how clients are informed of AI use in their matters, consistent with the firm's duty of candour and client care obligations.
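To make step 1 concrete, an AI inventory can be kept as a simple structured register. The sketch below is illustrative only — the field names and the two example tools are hypothetical, not a prescribed format — but it shows the minimum each entry should capture: what data the tool touches, where it is stored, and whether the contractual protections from step 3 are in place.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    # One entry in the firm's AI inventory (field names are illustrative)
    name: str
    vendor: str
    data_categories: list          # e.g. ["client correspondence", "case notes"]
    storage_region: str            # where the provider stores the data
    dpa_in_place: bool             # Data Processing Agreement signed with the vendor
    excluded_from_training: bool   # written confirmation data is not used for model training

def compliance_gaps(inventory):
    """Return the names of tools lacking a DPA or a model-training exclusion."""
    return [t.name for t in inventory
            if not (t.dpa_in_place and t.excluded_from_training)]

inventory = [
    AIToolRecord("General chatbot", "ExampleAI", ["case notes"], "US", False, False),
    AIToolRecord("Contract reviewer", "LegalTechCo", ["contracts"], "UK", True, True),
]

print(compliance_gaps(inventory))  # → ['General chatbot']
```

Even a register this simple makes the review-and-supervision conversation concrete: any tool flagged by a check like `compliance_gaps` should not be processing client-matter data until the gap is closed.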

Firms that already have ISO 27001 certification — or have completed Cyber Essentials Plus — can achieve ISO 42001 (the AI Management System standard) up to 40% faster, because the frameworks share a common structure. For security-mature firms, AI compliance is an extension of what is already in place.

Getting this done

The SRA is not going to wait for a separate AI regime — the existing rules are being enforced now. Courts are already sanctioning legal professionals for AI-related failures. The ICO is auditing AI processing under UK GDPR. The EU AI Act is weeks from full high-risk enforcement.

Most UK law firms — particularly small and medium practices — do not have a documented AI policy, a data processing register for their AI tools, or a supervision framework for AI-generated work. That is the compliance gap, and it is straightforward to close.

AI Compliance for Legal Professionals

We build documented AI compliance frameworks for UK professional services firms — including solicitors' practices. AI inventory, DPA review, supervision policy, and DPIA. Fixed price, delivered in five to seven working days.

See the Compliance Packages →
Call now · Book a free call →