Agentic AI in EU Financial Services Compliance: EU AI Act, DORA, and What the Tools Must Deliver


FinancialRegulations.EU Team

Regulatory Intelligence


Agentic AI is arriving in EU financial services compliance faster than most regulators anticipated. CUBE acquired 4CRisk in February 2026 specifically for its agentic AI capabilities. Corlytics has built autonomous regulatory change detection. Boutique RegTech firms are positioning agentic workflows — multi-step, autonomous, self-directed — as the future of regulatory intelligence.

For compliance officers and legal teams, this creates two distinct questions. First: how do agentic AI compliance tools fit within the EU regulatory framework — specifically the EU AI Act, DORA, and MiCAR? Second: what should compliance teams actually expect from these tools, and how should they evaluate them? See our full EU AI Act financial services guide for the wider regulatory context, and the MiCAR CASP compliance checklist for crypto-asset service provider obligations.

This guide answers both questions.


What Is Agentic AI? Why It's Different from Static AI

Traditional AI tools in compliance work produce outputs from a single query: a summarised document, a translated text, a classification label. They are stateless — each query is independent, and the system does not plan or execute multi-step tasks on its own.

Agentic AI systems are fundamentally different. They:

  • Plan — break a goal into subtasks and sequence them
  • Act autonomously — execute tool calls (web search, database query, document retrieval) without step-by-step human instruction
  • Adapt — modify their plan based on intermediate results
  • Maintain memory — carry context across steps within a session
  • Loop — retry failed steps, validate outputs against criteria, and continue until a goal is reached
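The plan/act/adapt loop above can be sketched in a few lines. This is a deliberately minimal illustration, not any vendor's implementation: `plan`, `act`, and `validate` are stubbed placeholders standing in for real LLM planning and tool calls.

```python
# Minimal sketch of an agentic loop: plan, act, validate, retry, remember.
# All helpers are stubs -- a real agent would call an LLM and external tools.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # context carried across steps
    max_steps: int = 10

    def plan(self):
        # Break the goal into subtasks (stubbed: one subtask per clause)
        return [f"research: {part}" for part in self.goal.split(", ")]

    def act(self, subtask):
        # Execute a tool call (stubbed); real agents would search or query here
        return f"result for {subtask!r}"

    def validate(self, result):
        return result is not None  # retry criterion (stubbed)

    def run(self):
        for step, subtask in enumerate(self.plan()):
            if step >= self.max_steps:
                break  # hard stop against runaway loops
            result = self.act(subtask)
            if not self.validate(result):
                result = self.act(subtask)  # single retry on failure
            self.memory.append((subtask, result))
        return self.memory

agent = Agent(goal="DORA Article 26, TLPT scope")
trace = agent.run()  # two subtasks, each with a recorded result
```

Even in this toy form, the structural features that matter for regulation are visible: the system sequences its own steps, retries on failure, and accumulates state that a human never individually approved.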

In a compliance context, the distinction is material. A static AI answers "What does DORA Article 26 require for TLPT?" An agentic AI could be given the goal "Identify all DORA obligations we have not yet documented in our policy library, cross-reference against our current ICT third-party register, and flag gaps with article citations" — and execute that goal autonomously across multiple systems.

This capability is genuinely useful. It is also what creates the regulatory complexity.


How Agentic AI Is Being Deployed in EU Regulatory Compliance

Knowledge Retrieval and Regulatory Q&A

This is the most mature use case: AI systems query a regulatory knowledge base (primary legislation, ESMA guidelines, EBA RTS, NCA circulars) and return precise, cited answers. The question is whether the system is simply returning pre-indexed text or actively reasoning across sources.

Platforms in this category — including ours — retrieve and synthesise across 70,000+ document chunks covering EU financial regulation, supporting parallel queries across jurisdictions and regulatory frameworks.

Regulatory Change Monitoring

Agentic systems that continuously monitor EUR-Lex, ESMA, EBA, EIOPA, and NCA websites for new publications, classify changes by regulation and topic, and surface relevant updates to compliance teams without manual scanning. CUBE's core product automates regulatory change detection at scale across 180+ jurisdictions.
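At its core, change monitoring is a diff between polls: compare the current publication listing against what has already been seen and surface only the delta. The sketch below illustrates that mechanism; the source IDs and fields are invented placeholders, not a real EUR-Lex or ESMA API.

```python
# Illustrative change detection: diff publication IDs between polls.
# Document IDs and topics are hypothetical placeholders.
def detect_new_publications(previous_ids: set, current_listing: list) -> list:
    """Return publications not seen in the previous poll."""
    return [doc for doc in current_listing if doc["id"] not in previous_ids]

seen = {"ESMA-2025-001", "EBA-RTS-2025-014"}
listing = [
    {"id": "ESMA-2025-001", "topic": "MiCAR"},
    {"id": "ESMA-2026-007", "topic": "DORA"},  # new since last poll
]
new_docs = detect_new_publications(seen, listing)
# Only the ESMA-2026-007 entry survives the diff
```

A production system layers classification (which regulation, which topic, which clients care) on top of this diff, but the diff is what eliminates manual scanning.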

Obligation Mapping and Gap Analysis

More advanced: systems that receive a firm's existing policy library and a target regulation, then autonomously identify obligations in the regulation, match them against policy coverage, and produce a gap report. 4CRisk (now part of CUBE) was built specifically for agentic obligation mapping — breaking a regulation into individual obligations, mapping each to internal controls, and flagging gaps.
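The mapping step described above reduces to a set comparison once obligations have been extracted: each obligation either matches an entry in the policy index or lands in the gap report with its citation. A hedged sketch, with invented obligation texts and policy names:

```python
# Sketch of obligation-to-policy gap analysis. The obligation texts and
# policy index below are invented examples, not extracted regulation text.
obligations = {
    "DORA Art. 28(3)": "maintain register of ICT third-party arrangements",
    "DORA Art. 30(2)": "contractual clauses for ICT services",
    "DORA Art. 26(1)": "threat-led penetration testing",
}
policy_index = {  # which articles the policy library already covers
    "DORA Art. 28(3)": "Third-Party Risk Policy v3",
    "DORA Art. 30(2)": "Outsourcing Contract Standard v2",
}

gaps = [
    {"citation": article, "obligation": text}
    for article, text in obligations.items()
    if article not in policy_index
]
# The TLPT obligation (Art. 26(1)) is flagged as uncovered
```

The hard part in practice is upstream of this comparison: extracting obligations at the right granularity and matching paraphrased policy language to regulatory text, which is where the agentic reasoning actually earns its keep.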

Compliance Workflow Orchestration

Emerging: agentic systems that coordinate multi-step compliance workflows — initiating a document review, routing it to the right analyst, checking completeness, triggering regulatory submissions. DORA360 by Gieom, with 100+ financial institution clients, is one example of this compliance workflow automation approach.


The EU AI Act: How Agentic Compliance Tools Are Classified

The EU AI Act (Regulation (EU) 2024/1689) does not use the word "agentic." But agentic AI systems clearly fall within the Act's Article 3(1) definition of an AI system: a machine-based system designed to operate with varying levels of autonomy that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Risk Classification for Compliance Tools

Whether a specific agentic compliance tool is a high-risk AI system under the EU AI Act depends on what it does and what decisions it influences. For the full classification framework, see our primer on AI Act risk classification.

Likely NOT high-risk (Article 6 + Annex III):

  • Regulatory Q&A tools that assist human compliance officers in understanding regulations
  • Regulatory change monitoring systems that flag updates for human review
  • Document summarisation and analysis tools where humans make the final compliance determination

The EU AI Act's Annex III high-risk categories are exhaustive. Compliance intelligence tools that support human decision-making without autonomously making consequential decisions about individuals do not obviously fall into the listed categories (credit scoring, employment, access to essential services, etc.).

Potentially high-risk:

  • AI systems that autonomously determine whether a product is compliant and publish that determination to regulators or customers
  • AI systems used to approve or reject KYC/AML assessments
  • AI systems that generate regulatory filings without human review before submission

Key EU AI Act Obligations from August 2, 2026

For any AI system classified as high-risk, providers and deployers must comply from 2 August 2026:

  • Risk management system (Article 9) — ongoing identification, analysis, and mitigation of risks
  • Data governance (Article 10) — training, validation, and testing data must meet quality criteria
  • Technical documentation (Article 11) — comprehensive documentation of system design, capabilities, limitations
  • Logging (Article 12) — automatic logging sufficient for traceability; for agentic systems this is non-trivial given multi-step execution
  • Transparency to deployers (Article 13) — clear information about capabilities, limitations, accuracy levels, and appropriate use conditions
  • Human oversight (Article 14) — technical measures enabling effective human monitoring; agentic autonomy creates tension with this requirement
  • Accuracy, robustness, cybersecurity (Article 15)

The Human Oversight Challenge

Article 14 of the EU AI Act requires high-risk AI systems to be designed so that natural persons can effectively oversee them — specifically, so that overseers can understand the system's capabilities and outputs, decide not to use it in a particular situation, override or reverse its output, and intervene in or halt its operation.

This is where agentic AI creates structural tension. The defining feature of agentic AI is that it operates across multiple steps without constant human instruction. A system that executes 40 sub-tasks autonomously before producing an output provides less opportunity for meaningful mid-process oversight than a single-step query.

Practically, this pushes agentic compliance system designers toward:

  • Checkpoint architecture — human approval required before certain consequential actions
  • Explainable reasoning traces — logging each step with the reasoning that led to it
  • Confidence scoring — surfacing uncertainty rather than presenting outputs as definitive
  • Override mechanisms — UI that makes it easy to reject, modify, or escalate agentic outputs

Compliance teams evaluating agentic tools should assess all four of these design features.
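The four design features can be combined in a single gating function. The sketch below is an assumption-laden illustration of the pattern, not a real product's API: consequential or low-confidence actions pause for an explicit human decision, and every step lands in an audit log with its reasoning and confidence.

```python
# Sketch of a checkpoint-gated agent step. The action names, 0.7 threshold,
# and `approve` callback are hypothetical design choices for illustration.
import time

AUDIT_LOG = []
CONSEQUENTIAL = {"submit_filing", "publish_determination"}

def run_step(action: str, reasoning: str, confidence: float, approve=input):
    entry = {
        "ts": time.time(),
        "action": action,
        "reasoning": reasoning,    # explainable trace (supports Art. 12 logging)
        "confidence": confidence,  # surfaced, not hidden behind a definitive answer
        "status": "pending",
    }
    if action in CONSEQUENTIAL or confidence < 0.7:
        # Checkpoint: a human must explicitly approve, otherwise the step halts
        decision = approve(f"Approve {action}? [y/N] ")
        entry["status"] = "approved" if decision.strip().lower() == "y" else "rejected"
    else:
        entry["status"] = "auto"   # low-stakes, high-confidence step proceeds
    AUDIT_LOG.append(entry)
    return entry["status"]
```

The key design choice is that the gate is conjunctive over action type and confidence: a routine retrieval runs autonomously, but a filing submission always stops for a human regardless of how confident the model claims to be.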


DORA and Agentic AI: ICT Risk Management

DORA (Regulation (EU) 2022/2554) applies to all ICT systems used by in-scope financial entities — and AI-based compliance tools clearly qualify as ICT systems.

Under DORA, financial entities using agentic AI compliance tools must:

  • Include AI tools in the ICT third-party provider risk register (Article 28) — if the tool is provided by an external vendor, it must be documented in the ICT third-party register with contractual clauses on availability, data access, audit rights, and exit planning
  • Assess concentration risk — if multiple functions rely on the same agentic AI vendor, this creates ICT concentration risk; DORA Article 29 requires concentration risk assessment
  • Ensure contractual protections — DORA Article 30 mandates specific contractual clauses with ICT third-party providers, including: full description of services, SLAs with availability targets, location of data processing, incident reporting obligations to the financial entity, audit rights, and termination rights
  • Test resilience — agentic AI systems should be included in the DORA operational resilience testing programme; for significant entities, this means scenario testing that includes failure of critical ICT third parties

Practical implication: Before deploying an agentic AI compliance tool, in-scope financial entities should conduct DORA-compliant due diligence on the provider, ensure the contract meets Article 30 requirements, and include the system in the ICT third-party register.
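As a concrete illustration, a register entry for an AI tool can mirror the Article 30 clause topics listed above. The field names below are our own working shape, not a regulator-prescribed schema:

```python
# Illustrative ICT third-party register entry for an agentic AI tool,
# tracking the DORA Art. 30 clause topics. Vendor and values are invented.
from dataclasses import dataclass

@dataclass
class ICTThirdPartyEntry:
    provider: str
    service_description: str          # full description of services
    availability_sla: str             # SLA with availability target
    data_locations: list              # where data is processed
    incident_reporting: bool          # provider notifies the financial entity
    audit_rights: bool
    exit_plan_documented: bool        # termination / exit planning
    supports_critical_function: bool  # drives concentration-risk review (Art. 29)

entry = ICTThirdPartyEntry(
    provider="ExampleRegTech Ltd",    # hypothetical vendor
    service_description="agentic regulatory research and gap analysis",
    availability_sla="99.5% monthly",
    data_locations=["EU (Ireland)"],
    incident_reporting=True,
    audit_rights=True,
    exit_plan_documented=False,       # gap: flag for remediation before go-live
    supports_critical_function=True,
)
```

An entry with `supports_critical_function=True` and a missing exit plan is exactly the kind of record a DORA due-diligence review should surface before deployment, not after.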


MiCAR and AI-Driven Compliance for CASPs

For Crypto-Asset Service Providers (CASPs) under MiCAR, agentic AI creates both opportunities and obligations. See the MiCAR CASP NCA authorisation tracker for jurisdiction-by-jurisdiction authorisation status across Luxembourg, Germany, the Netherlands, and France.

Opportunities:

  • Automated monitoring of MiCAR Article 76 market abuse surveillance obligations (wash trading, front-running, market manipulation detection)
  • Continuous monitoring of whitepaper obligations versus actual asset behaviour
  • Regulatory change tracking across all seven EU MiCAR NCAs

Obligations:

  • CASPs using AI systems for compliance decisions must assess EU AI Act classification
  • AI systems used for AML/CFT screening or transaction monitoring may be high-risk depending on the decisions made
  • Any AI system used to generate or approve regulatory filings requires documented human oversight processes

What to Demand From Agentic Compliance Tools

When evaluating agentic AI tools for regulatory compliance work, ask these questions:

On accuracy and citation

  • Does the system cite primary sources (EU Official Journal, ESMA guidelines, EBA RTS) with article-level precision, or does it summarise without traceable sources?
  • What is the knowledge base coverage? How many document chunks, across which regulations and jurisdictions?
  • How is the knowledge base updated when regulations change? What is the lag between a new ESMA guideline being published and it being searchable in the system?

On agentic reliability

  • Can the system explain its reasoning at each step? Is there an audit log of what queries it ran and which sources it used?
  • What happens when the system is uncertain? Does it say so, or does it present uncertain answers with false confidence?
  • Is there a confidence or reliability score attached to outputs?

On EU AI Act compliance (as a deployer)

  • Has the provider documented how the system is classified under the EU AI Act?
  • If high-risk: what technical documentation, logging, and transparency information does the provider make available?
  • What human oversight mechanisms are built into the interface?

On DORA compliance (for in-scope financial entities)

  • Does the provider's contractual framework meet DORA Article 30 mandatory clause requirements?
  • Where is data processed? Does this raise data residency concerns?
  • What are the provider's SLAs on availability and incident notification?


The Compliance Intelligence Stack: Where AI Fits

Agentic AI does not replace compliance teams. The current state of the technology means it is most reliable as a research accelerator — dramatically reducing the time to find, read, and synthesise regulatory text — rather than as an autonomous decision-maker that operates without human review.

The practical allocation of function:

Function                            AI-assisted          Human-led
Regulatory research and Q&A         Primary              Review + escalation
Obligation identification           Primary              Validation + judgment
Gap analysis                        Primary              Prioritisation + remediation
Policy drafting                     First draft          Review + approval
NCA filings                         Preparation support  Sign-off + submission
Client/investor communication       Draft                Review + approval
Material regulatory judgment calls  Research support     Decision
The tools in this market that perform best deliver cited, article-precise answers for the research layer — letting compliance officers spend their time on the judgment layer, where human expertise and accountability cannot be automated away.


Frequently Asked Questions

Q: Does the EU AI Act apply to AI tools I buy from a vendor, or only to tools I build myself?

A: Both. The EU AI Act distinguishes between providers (who develop and place AI systems on the market) and deployers (who use AI systems in their operations). Financial entities that buy and deploy agentic compliance tools are deployers under the Act. Deployers of high-risk AI systems have their own obligations: conducting fundamental rights impact assessments in certain cases, ensuring human oversight, implementing data governance for systems that process personal data, maintaining use logs, and providing transparency to affected individuals. If you deploy a high-risk AI system, you cannot simply point to the vendor's obligations — you have your own.

Q: Is a compliance chatbot or regulatory Q&A tool high-risk under the EU AI Act?

A: Most compliance Q&A tools are not high-risk under the EU AI Act's Annex III. The high-risk categories are specific: AI systems used in credit scoring, employment decisions, access to essential services, law enforcement, administration of justice, and a small number of other areas. A tool that helps a compliance officer research DORA obligations or draft a gap analysis is unlikely to be classified as high-risk — it is an AI system used to support a professional's research, not to make consequential decisions about individuals. However, if an AI system is used to make autonomous AML/KYC determinations about customers, or to approve or reject client onboarding, the analysis changes significantly.

Q: How is agentic AI different from a compliance workflow tool?

A: Traditional compliance workflow tools (like GRC platforms) automate processes by routing tasks between predefined steps with predefined rules. They execute workflows that humans have designed. Agentic AI systems can design and execute their own workflows based on a goal. Asked to "identify all MiCAR obligations applicable to a CASP conducting portfolio management services and map them to our control framework," an agentic system will plan the research, execute it across multiple sources, synthesise the output, and present a result — without a human specifying the intermediate steps. This flexibility is the value proposition and the source of the EU AI Act's human oversight challenge.

Q: CUBE acquired 4CRisk for agentic AI. Should compliance teams be concerned about concentration risk?

A: Yes — but not uniquely because of agentic AI. Any compliance team that relies heavily on a single RegTech vendor for regulatory intelligence creates ICT concentration risk under DORA. CUBE's acquisition of 4CRisk accelerates their agentic capability, but the concentration risk question is the same as for any critical ICT vendor: what happens to your compliance workflow if the service is unavailable? Financial entities subject to DORA should assess their regulatory intelligence tools as part of their ICT third-party risk framework, regardless of vendor.
