AI Compliance · EU AI Act · 9 April 2026

EU AI Act Compliance 2026:
What Every Business With EU Customers Must Do

If your business has EU customers, EU employees, or EU operations — and you use any AI tool to serve them — you have compliance obligations under the EU AI Act. Full enforcement begins 2 August 2026. Most businesses in scope haven't started yet.

This isn't a regulation for AI developers. It's a regulation for businesses that use AI. And because it mirrors GDPR's extraterritorial reach, it applies regardless of where your business is based. UK businesses. US businesses. Canadian businesses. If an EU resident interacts with your AI in any meaningful way, the EU AI Act applies to you.

What actually is the EU AI Act?

The EU AI Act (Regulation 2024/1689) is a risk-based framework that classifies every AI system into one of four tiers — unacceptable risk, high risk, limited risk, and minimal risk — and assigns specific obligations to each. It entered into force in August 2024 and is rolling out in phases, with full enforcement landing on 2 August 2026.

The Act distinguishes between AI providers (the companies that build AI systems, like OpenAI or Microsoft) and AI deployers (the businesses that use those systems in their own products and services — which is most of us). Providers face the strictest obligations. But deployers — businesses using ChatGPT, Copilot, Claude, AI chatbots, AI recruitment tools — face meaningful obligations too. That's the article of the Act that most businesses don't know about: Article 26.

The extraterritorial question: does this apply to my UK or US business?

Yes, if you have any connection to the EU. The Act applies to:

  • Any provider or deployer whose AI system outputs are used in the EU, regardless of where they're based
  • Any business with employees in EU member states who are subject to AI-influenced decisions (hiring, performance, disciplinary)
  • Any business whose AI systems process the personal data of EU residents

This is the same framework as GDPR — it's about where your users are, not where your business is. A UK accountancy firm that uses AI to process client documents for French clients is in scope. A US SaaS company whose German users interact with an AI chatbot is in scope. The moment an EU person interacts with AI you've deployed, your obligations attach.

The four risk tiers — and what each means for your business

The EU AI Act's risk classification determines which obligations apply to you. Here's what each tier means in practice.

Unacceptable risk — prohibited entirely

Since February 2025, these AI practices are banned: social scoring systems, real-time biometric surveillance in public spaces (with very limited exceptions), AI that exploits vulnerabilities of specific groups, and subliminal AI manipulation. If your business uses AI in any of these ways, you are already non-compliant.

High risk — full Annex III obligations (from 2 August 2026)

This is the tier most SMBs don't realise they're in. Annex III of the Act sets out the high-risk categories. If you use AI in any of these areas and your decisions affect EU citizens, you face the most stringent requirements: conformity assessments, technical documentation, human oversight procedures, and in some cases registration in the EU AI database.

The eight high-risk categories:

  • Employment decisions — AI used to screen CVs, rank candidates, assess performance, recommend promotions, or assist disciplinary decisions affecting EU employees
  • Education and vocational training — automated student assessment or access decisions
  • Essential private services — AI in credit scoring, insurance risk assessment, and similar financial decisions
  • Access to public services — AI influencing eligibility for benefits, social housing, or public assistance
  • Law enforcement — risk assessment tools, predictive policing
  • Migration and border control — risk assessment of asylum applications, document verification
  • Critical infrastructure — AI managing power, water, transport safety
  • Regulated products — AI embedded in medical devices, machinery, vehicles (covered via Annex I rather than Annex III; full obligations from August 2027)

The practical takeaway: if you use AI in HR decisions, in financial services such as credit or insurance, or in regulated products such as medical devices — and those decisions affect EU residents — you're in the high-risk tier.

Limited risk — transparency obligations (from 2 August 2026)

This is where most businesses land. Article 50 of the Act requires transparency disclosures whenever an EU user interacts with a limited-risk AI system. Specifically:

  • AI chatbots and virtual assistants must disclose that users are interacting with an AI
  • AI-generated content (text, images, audio, video) must be appropriately labelled
  • Emotion recognition systems and biometric categorisation must disclose their nature to subjects
  • Deepfake content must be clearly marked as AI-generated

If you have an AI chatbot on your website that EU users interact with — even a basic customer service bot — Article 50 disclosure is mandatory from 2 August 2026.

Minimal risk — no mandatory obligations

AI spam filters, AI-assisted grammar checkers, and AI used purely for internal productivity that never touches EU customer or employee data carry no tier-specific obligations. However, the AI literacy requirement (Article 4) still applies to any organisation using AI — it's the floor-level requirement, not optional.

Article 26: deployer obligations explained

Article 26 is the part of the EU AI Act that most businesses haven't read. It sets out what deployers — businesses that use AI systems built by others — are required to do. The obligations for deployers of high-risk AI include:

  • Implement a human oversight function — a named, trained person who can intervene, override, or halt the AI system's decisions
  • Ensure staff have sufficient AI literacy to operate and oversee the system responsibly
  • Monitor the AI system for bias, drift, or unexpected outputs on an ongoing basis
  • Keep logs of AI system operation for inspection by regulatory authorities
  • Conduct a Fundamental Rights Impact Assessment (FRIA) before deploying high-risk AI in certain public-facing contexts — strictly a requirement of the closely related Article 27, but part of the same deployer package
  • Report serious incidents or malfunctions to the relevant national authority

For businesses using AI in HR decisions — which is a very common use case — these obligations are live from 2 August 2026. You need a named human oversight function, staff training documentation, and a documented process for reviewing AI recommendations before acting on them.

Article 4: AI literacy — the requirement everyone skips

Article 4 of the EU AI Act requires all providers and deployers to ensure their staff have an adequate level of "AI literacy" — an understanding of the AI systems they use and the risks those systems carry. This is not discretionary: it has applied since 2 February 2025, and it covers every business in scope, regardless of whether its AI is high-risk or minimal-risk.

In practice, this means an AI literacy policy and training documentation — something that demonstrates your staff understand what the AI tools they use can and can't do, and what your business's rules are for using them. It doesn't need to be a 100-page manual, but it needs to exist.

The fine structure — and who enforces it

Fines under the EU AI Act are tiered by violation type and calculated against global annual turnover — meaning they can be significant for any business of scale:

  • Using prohibited AI practices: up to €35 million or 7% of global annual turnover, whichever is higher
  • Violations of high-risk AI obligations: up to €15 million or 3% of global annual turnover
  • Providing incorrect information to national competent authorities: up to €7.5 million or 1% of global turnover
  • SME and startup caps apply — for smaller businesses, each fine is capped at whichever is lower of the fixed amount or the percentage of turnover, rather than whichever is higher

Each EU member state designates national authorities responsible for enforcement within its territory. In many member states this role is expected to go to the existing data protection authority — the body that enforces GDPR — but each state makes its own choice. These authorities will have powers to inspect documentation, request technical files, and issue enforcement notices.

What a compliance framework actually looks like

Compliance doesn't mean a single policy document. It means a set of interconnected documents and procedures that together demonstrate your organisation has taken the Act seriously. For a typical SMB using common AI tools (ChatGPT, Copilot, AI chatbot, AI-assisted HR), a complete compliance position requires:

  • AI tool inventory — a documented register of every AI system in use, its purpose, and its risk classification under the Act
  • AI literacy policy — Article 4 compliance: what your staff know about AI, what they're permitted to do with it, and what they're prohibited from doing
  • Article 50 transparency disclosures — published notices wherever EU users interact with AI systems: website chatbot disclosures, AI-generated content labels, and email or marketing disclosures
  • Human oversight procedures — for any high-risk AI use: a named oversight function, a documented review process, and an override procedure
  • Impact assessment — a documented assessment of how AI decisions may affect individuals' fundamental rights, particularly for HR AI use
  • Incident response procedure — what happens if your AI system behaves unexpectedly, causes harm, or is subject to a regulatory enquiry

All of this needs to be in writing, accessible to staff, and reviewable by regulators. Verbal assurances don't count.
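As a minimal sketch of the first item, an AI tool inventory can be a small structured register — a spreadsheet works just as well. The field names and example entries below are our own illustration, not anything prescribed by the Act:

```python
# Illustrative AI tool inventory register. Field names and example tools are
# hypothetical — the Act requires the record-keeping, not this exact shape.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str          # what the tool is
    purpose: str       # what the business uses it for
    affects_eu: bool   # do its outputs reach EU residents?
    risk_tier: str     # "high", "limited", or "minimal" per your assessment

inventory = [
    AITool("Website chatbot", "Customer support", affects_eu=True, risk_tier="limited"),
    AITool("CV screening tool", "Shortlisting candidates", affects_eu=True, risk_tier="high"),
    AITool("Grammar checker", "Internal drafting", affects_eu=False, risk_tier="minimal"),
]

# Tools that trigger Article 26 deployer obligations (oversight, monitoring, logs):
high_risk = [t.name for t in inventory if t.affects_eu and t.risk_tier == "high"]
print(high_risk)  # ['CV screening tool']
```

A register like this also feeds the other documents directly: every "high" entry needs a named oversight function, and every "limited" entry needs an Article 50 disclosure.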

The timeline reality: four months left

The key milestones that have already passed:

  • August 2024 — Act enters into force
  • February 2025 — Prohibited practices enforceable
  • August 2025 — General-purpose AI rules and fines now active

What's coming:

  • 2 August 2026 — Full enforcement: deployer obligations (Article 26), transparency requirements (Article 50), and all high-risk AI rules. This is the deadline that affects most businesses.
  • August 2027 — AI embedded in regulated products (medical devices, machinery, vehicles)

2 August 2026 is four months away. In our experience, businesses that start late rush their documentation, miss obligations, and end up with compliance documents that wouldn't withstand regulatory scrutiny. Starting now means you do it properly.

What to do next

The most important first step is identifying which tier your AI use falls into. Run through this quickly:

  1. List every AI tool your business uses — ChatGPT, Copilot, an AI chatbot, AI HR software, anything
  2. For each tool, ask: does this system's output affect EU residents? If yes, you have Article 4 and potentially Article 50 obligations
  3. For each tool, ask: does this system influence employment, credit, insurance, housing, education, or healthcare decisions for EU residents? If yes, you're likely in the high-risk tier
  4. Once you know your tier, you know which documents you need
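The four steps above can be sketched as a rough first-pass triage. The category list and tier labels here are our own simplification of Annex III, for scoping only — not legal advice:

```python
# Rough, illustrative triage of one AI tool per the four-step walkthrough.
# The area names are a simplified stand-in for the Annex III categories.
HIGH_RISK_AREAS = {"employment", "credit", "insurance", "housing",
                   "education", "healthcare"}

def classify(affects_eu_residents: bool, decision_areas: set) -> str:
    """Return a first-pass risk tier for a single AI tool."""
    if not affects_eu_residents:
        return "minimal"               # Article 4 literacy still applies org-wide
    if decision_areas & HIGH_RISK_AREAS:
        return "high"                  # Article 26 deployer obligations
    return "limited"                   # Article 50 transparency duties

print(classify(True, {"employment"}))  # high
print(classify(True, set()))           # limited
print(classify(False, {"credit"}))     # minimal
```

Run this mentally for each entry on your tool list; the tier it returns tells you which set of documents from the framework section you need for that tool.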

If you'd rather not map this yourself, we build complete EU AI Act compliance frameworks — from the initial scoping through to fully documented, auditable compliance packs delivered within 7–10 working days.

Need your EU AI Act compliance framework before August 2026?

We scope, classify, document, and deliver — in your language if needed. Packages start from £497 (~$630 / ~€580).

See the EU AI Act Compliance Packages →
Call Now · Book a Free Call