AI Compliance · EU Accountants & Finance 18 April 2026

AI Compliance for EU Accountants 2026:
EU AI Act, DORA & GDPR for Finance Professionals

European accountants and finance professionals are operating under a compliance framework for AI that is already partially in force and becomes fully enforceable in 2026. The EU AI Act explicitly names creditworthiness assessment and credit scoring of natural persons as high-risk AI applications in Annex III — not a category requiring interpretation, but a direct classification. The Digital Operational Resilience Act (DORA) has applied to financial sector firms since January 2025, creating ICT risk management obligations that cover AI systems. GDPR obligations on client financial data have been enforceable since 2018. This guide covers what EU accountants, auditors, tax advisers, and finance professionals need to have in place across all three frameworks.

This applies to your practice or firm if:
  • You use AI in financial reporting, tax preparation, or bookkeeping automation
  • You use AI in credit assessment, financial risk modelling, or client investment decisions
  • You upload client financial data into AI platforms — including general tools like Copilot, ChatGPT, or Gemini
  • You are subject to DORA as a financial entity (credit institutions, insurance undertakings, investment firms, payment institutions), or you serve regulated entities that flow DORA's requirements down through their contracts
  • You are based outside the EU but serve EU clients — the EU AI Act applies to systems whose outputs are used within the EU regardless of where the provider or deployer is established
See our EU AI Act compliance packages →

EU AI Act: credit scoring and insurance risk assessment are explicitly high-risk

Annex III of the EU AI Act lists high-risk AI applications with little ambiguity for the financial sector. It explicitly includes AI systems used to evaluate the creditworthiness of natural persons or establish their credit score (with a carve-out for systems used to detect financial fraud), and AI systems used for risk assessment and pricing in relation to natural persons in life and health insurance. These are direct classifications, not inferred from general principles.

For EU accountants and finance professionals, this means any AI tool used in these contexts carries mandatory compliance obligations from 2 August 2026:

  • Risk management system: A documented process for identifying, analysing, and mitigating risks posed by the high-risk AI system throughout its lifecycle.
  • Data governance: Training, validation, and testing data must meet quality standards — data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and biases that would produce discriminatory outputs.
  • Technical documentation: Maintained before deployment and kept current — covering the system's design, performance characteristics, and intended purpose.
  • Transparency to users: High-risk AI systems must be sufficiently transparent that the professionals using them can interpret outputs and understand their limitations.
  • Human oversight: Measures enabling the professionals deploying the system to understand and oversee its operation, intervene, and override decisions.
  • Accuracy, robustness, and cybersecurity: Documented standards for the system's performance, with monitoring for degradation over time.
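
A minimal sketch of how these six obligations could be tracked per system, assuming a firm keeps a simple internal checklist; the field names are illustrative shorthand, not regulatory terms:

```python
from dataclasses import dataclass, fields

# Hypothetical per-system checklist mirroring the six obligations above.
@dataclass
class HighRiskChecklist:
    risk_management_system: bool = False
    data_governance: bool = False
    technical_documentation: bool = False
    transparency_to_users: bool = False
    human_oversight: bool = False
    accuracy_robustness_security: bool = False

    def open_items(self) -> list[str]:
        """Return the obligations not yet evidenced for this system."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a tool with documentation and oversight in place, rest outstanding.
checklist = HighRiskChecklist(technical_documentation=True, human_oversight=True)
print(checklist.open_items())
```

Listing the gaps per system, rather than a single yes/no flag, matches how the Act's obligations are assessed: each one needs its own evidence.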

Penalties: up to €15 million or 3% of global annual turnover, whichever is higher, for non-compliance with high-risk obligations. Up to €35 million or 7% of global annual turnover for prohibited AI practices.

DORA: ICT risk management for financial sector firms

The Digital Operational Resilience Act (Regulation (EU) 2022/2554) has applied to financial sector entities since 17 January 2025. DORA covers a broad range of financial entities — including credit institutions, investment firms, insurance undertakings, and payment institutions. Statutory auditors and audit firms are not directly in scope of the final text, but firms serving regulated entities routinely face DORA-derived requirements passed down through their clients' contracts.

For AI systems, DORA's ICT risk management framework creates specific obligations:

  • ICT asset inventory: Financial entities must maintain a comprehensive register of all ICT assets, including AI tools and the platforms they run on. "We use it informally" is not a compliant position under DORA.
  • Third-party risk management: AI tools provided by third-party vendors are ICT third-party service providers under DORA. Contracts must include specific provisions on performance, security, audit rights, and exit strategies.
  • Concentration risk: Where multiple critical functions depend on a single AI provider, DORA's concentration risk provisions require assessment and documentation of the dependency risk.
  • Incident reporting: Significant ICT-related incidents — including AI system failures that cause operational disruption — must be reported to national competent authorities within defined timeframes.

DORA is not a future requirement. Firms that were in scope on 17 January 2025 are already subject to supervision. The European Supervisory Authorities (ESAs) have active oversight programmes.
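
The asset-inventory and concentration-risk points above can be sketched as follows; the entry fields and the per-vendor count are assumptions about how a firm might structure its own register, not terms defined in the Regulation:

```python
from dataclasses import dataclass

# Hypothetical DORA-style register entry for an AI tool.
@dataclass
class ICTAsset:
    name: str
    vendor: str
    function: str
    critical: bool                      # supports a critical or important function?
    contract_has_audit_rights: bool = False
    contract_has_exit_strategy: bool = False

def concentration_risk(register: list[ICTAsset]) -> dict[str, int]:
    """Count critical functions per vendor to flag single-provider dependency."""
    counts: dict[str, int] = {}
    for asset in register:
        if asset.critical:
            counts[asset.vendor] = counts.get(asset.vendor, 0) + 1
    return counts

# Example register: two critical functions depend on the same vendor.
register = [
    ICTAsset("DraftBot", "VendorA", "tax drafting", critical=True),
    ICTAsset("RiskScorer", "VendorA", "credit risk", critical=True),
    ICTAsset("OCRTool", "VendorB", "document intake", critical=False),
]
print(concentration_risk(register))
```

Any vendor appearing more than once in the critical count is a candidate for a documented dependency assessment.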

Need to map your firm's AI tools against the EU AI Act and DORA?

Our EU AI Act compliance packages cover risk classification, technical documentation, data governance review, and DORA-aligned ICT asset registers. Fixed price, built for finance and professional services firms.

See the EU AI Act Packages →

GDPR: client financial data and AI processors

GDPR has applied across the EU since 2018. For accountants and finance professionals, the processing of client financial data through AI tools creates obligations that most practices have not formally documented.

When any AI platform processes personal financial data about clients — tax records, account information, credit details, financial projections — that platform is acting as a data processor. The practice retains full data controller liability. Required documentation:

  • A Data Processing Agreement with every AI vendor, covering data residency (EU hosting, or a lawful transfer mechanism such as an adequacy decision or standard contractual clauses), retention periods, and confirmation that client data is not used to train AI models without separate consent
  • A documented lawful basis for each AI processing purpose
  • Data Protection Impact Assessment for high-risk processing — which financial data AI processing typically qualifies as
  • Records of processing activities covering all AI-assisted workflows

Article 22 of GDPR gives individuals the right not to be subject to solely automated decisions that produce legal or similarly significant effects — including automated credit assessments, financial risk scoring, and AI-driven investment recommendations. A documented human review mechanism is required for these use cases.
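
One way to make that human review mechanism concrete is a gate that refuses to release an AI output as a decision until a named reviewer has signed off. This is a hypothetical sketch; the class and field names are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical Article 22 gate: the AI score is an input, never the decision.
@dataclass
class CreditAssessment:
    client_id: str
    ai_score: float
    ai_recommendation: str
    reviewer: Optional[str] = None
    reviewer_decision: Optional[str] = None

    def final_decision(self) -> str:
        # Block solely automated output: no named reviewer, no decision.
        if self.reviewer is None or self.reviewer_decision is None:
            raise RuntimeError("human review required before decision is issued")
        return self.reviewer_decision

# Example: the decision only exists once a reviewer records their judgement.
assessment = CreditAssessment("C-001", 0.62, "decline")
assessment.reviewer = "J. Smith"
assessment.reviewer_decision = "approve with conditions"
print(assessment.final_decision())
```

Keeping the AI recommendation and the reviewer's decision as separate fields also produces the audit trail a regulator would expect to see.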

GDPR enforcement penalties: €20 million or 4% of global annual turnover, whichever is higher. National data protection authorities across the EU have active enforcement programmes specifically targeting AI-related GDPR violations.

Professional body obligations: IFAC, EFAA, and national institutes

The International Federation of Accountants (IFAC) and the European Federation of Accountants and Auditors for SMEs (EFAA) have both published guidance on AI use in accountancy and audit. National institutes across EU member states — including the OEC (France), IDW (Germany), NBA (Netherlands), and their equivalents — apply the same principle as professional bodies globally: competence obligations extend to understanding the limitations of AI tools used in client work.

For statutory auditors, the obligation is particularly direct: audit quality standards require that AI-assisted audit procedures are understood by the signing auditor, that outputs are verified against primary evidence, and that the audit file documents how AI was used and how outputs were assessed. An AI tool is not a substitute for professional judgement in audit — it is an input to that judgement, and the documentation must reflect this.

The explainability requirement: MiFID II and client-facing advice

For investment advisers and financial planners operating under MiFID II, AI systems used in client-facing recommendations carry an additional layer of obligation. MiFID II's suitability and appropriateness requirements mean that any AI-assisted recommendation must be explainable to the client — and explainable to the regulator.

Black-box AI models that generate investment recommendations without interpretable reasoning are inconsistent with MiFID II obligations. The professional deploying the tool must understand the basis for its outputs and be able to document how client-specific circumstances were taken into account. This is a fiduciary obligation, not merely a technical one.

What a compliant EU finance professional AI framework requires

  1. AI inventory and risk classification: All AI tools documented, with determinations of which fall under EU AI Act high-risk classification and which are subject to DORA's ICT asset requirements.
  2. DORA-compliant vendor contracts: Third-party AI vendor agreements reviewed and updated to include DORA's required contractual provisions.
  3. GDPR data processing register: All AI processors documented with confirmed DPAs, data residency, and model training opt-outs.
  4. Conformity assessment and technical documentation: Completed for any high-risk AI tools before August 2026.
  5. Human oversight policy: A documented process for review and verification of AI-generated financial outputs before delivery to clients or submission to authorities.
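
Step 1 above can be sketched as a simple classification pass over the inventory; the use-case strings and scoping rules here are illustrative assumptions, not legal determinations:

```python
# Hypothetical classification pass over a firm's AI tool inventory.
# A real determination requires legal review against Annex III and DORA scope.
HIGH_RISK_USES = {"credit scoring", "creditworthiness", "insurance risk pricing"}

def classify(tool: dict) -> dict:
    """Flag one inventory entry against the three frameworks."""
    return {
        "name": tool["name"],
        "ai_act_high_risk": tool["use_case"] in HIGH_RISK_USES,
        "dora_ict_asset": tool["in_scope_financial_entity"],
        "gdpr_processor": tool["processes_personal_data"],
    }

inventory = [
    {"name": "ScoreAI", "use_case": "credit scoring",
     "in_scope_financial_entity": True, "processes_personal_data": True},
    {"name": "MinuteBot", "use_case": "meeting notes",
     "in_scope_financial_entity": True, "processes_personal_data": True},
]
for tool in inventory:
    print(classify(tool))
```

Even a crude pass like this separates the tools that trigger conformity assessment from those that only need DPA and register entries.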

EU AI Act Compliance for Finance Professionals

Risk classification, conformity assessment, DORA vendor review, GDPR data processing register. Fixed price, delivered in five to seven working days. Built for accountants, auditors, and finance firms across the EU.

See the EU AI Act Packages →
Call Now · Book a Free Call