EU AI Act: What Financial Services Firms Need to Know Before August 2026
The EU Artificial Intelligence Act — Regulation (EU) 2024/1689 — is the world's first comprehensive legal framework for AI. It entered into force on 1 August 2024, and its most consequential provisions for financial services take effect on 2 August 2026: the obligations for high-risk AI systems. For banks, insurers, investment firms, and fintechs deploying AI in credit scoring, fraud detection, AML screening, insurance pricing, or investment recommendations, this is a new compliance layer that operates alongside — and sometimes intersects with — DORA, MiFID II, SFDR, and Solvency II.
This guide covers the AI Act's structure, its specific impact on financial services, the high-risk obligations that apply from August 2026, and the intersection with existing financial regulation.
The AI Act's Risk-Based Architecture
The AI Act classifies AI systems into four risk categories, each carrying different obligations:
Unacceptable Risk (Prohibited — Article 5)
Certain AI practices are outright banned. These prohibitions applied from 2 February 2025 and include:
- Social scoring by public authorities — and by private entities where it leads to detrimental treatment unrelated to the context in which data was generated
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
- Emotion recognition in the workplace and educational institutions
- Manipulative AI that exploits vulnerabilities of specific groups (age, disability, social or economic situation)
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
For financial institutions, the social scoring prohibition has direct relevance: AI systems that evaluate natural persons based on social behaviour or personal characteristics in ways leading to unfavourable treatment that is unjustified or disproportionate are prohibited. Credit scoring systems must be carefully designed to avoid crossing this line.
High Risk (Regulated — Articles 6-49)
This is where financial services firms are most affected. AI systems classified as high-risk must comply with extensive requirements before being placed on the market or put into service. These obligations apply from 2 August 2026.
Limited Risk (Transparency — Article 50)
AI systems that interact with natural persons (chatbots), generate synthetic content (deepfakes), or perform emotion recognition must disclose that fact. These transparency obligations apply from 2 August 2026, the same date as the high-risk regime.
Minimal Risk
AI systems not falling into the above categories are unregulated under the AI Act, though voluntary codes of conduct are encouraged.
Why Financial Services Is High-Risk
Annex III of the AI Act lists the domains where AI systems are classified as high-risk under Article 6(2). Point 5(b) of Annex III explicitly covers financial services:
AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud.
Additionally, point 5(c) covers:
AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.
Beyond Annex III's explicit mentions, several other high-risk categories intersect with financial services:
- Point 3(a): AI systems used in education or vocational training that determine access or admission — relevant for firms using AI in professional training or certification
- Point 4(a): AI systems used for recruitment, CV screening, interview evaluation — banks and asset managers using AI hiring tools
- Point 4(b): AI systems for task allocation, performance monitoring, or termination decisions — workforce management AI in financial institutions
- Point 5(a): AI systems used by or on behalf of public authorities to evaluate eligibility for essential public assistance benefits and services — relevant where financial institutions administer public programmes
Financial AI Use Cases That Are High-Risk
| Use Case | Annex III Category | High-Risk? |
|---|---|---|
| Credit scoring / creditworthiness assessment | 5(b) | Yes |
| Insurance risk assessment and pricing (life/health) | 5(c) | Yes |
| AML/CFT transaction monitoring | Not explicitly listed | See below |
| Fraud detection | 5(b) exception | Excluded |
| Robo-advisory / investment recommendations | Not explicitly listed | See below |
| KYC/customer onboarding with biometric ID | Annex III point 1 | Potentially |
| AI-driven recruitment at financial firms | 4(a) | Yes |
| Algorithmic trading | Not listed | No (but MiFID II applies) |
The AML screening question is nuanced. The AI Act does not explicitly list AML transaction monitoring in Annex III. However, where AML systems make decisions that affect natural persons' access to financial services (e.g., automatic de-risking or account closure), they may fall within Annex III point 5(b) or trigger the fundamental rights impact assessment under Article 27. The European Commission and the ESAs have not yet published definitive guidance on this boundary, so firms should treat material AML automation as potentially high-risk until it is clarified.
Fraud detection is explicitly excluded from the high-risk classification under point 5(b) of Annex III. The legislators recognised that fraud detection AI operates to protect users rather than to make adverse decisions about them. However, where fraud detection leads to automatic account freezing or transaction blocking that affects individuals, firms should assess whether those downstream decisions cross back into high-risk territory.
High-Risk AI System Obligations (August 2026)
From 2 August 2026, providers of high-risk AI systems must comply with the following requirements (deployers take on related duties under Article 26, covered below):
1. Risk Management System (Article 9)
A continuous, iterative risk management system must be established, documented, and maintained throughout the AI system's lifecycle. This includes:
- Identification and analysis of known and reasonably foreseeable risks
- Estimation and evaluation of risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse
- Adoption of appropriate risk management measures
- Testing to identify the most appropriate risk management measures
For financial institutions, this intersects directly with DORA's ICT risk management framework (Articles 5-16). Firms will need to decide whether to integrate AI risk management into their DORA framework or maintain separate processes. The former is more efficient; the latter may be required where the risk profiles are sufficiently distinct.
2. Data Governance (Article 10)
Training, validation, and testing datasets must meet quality criteria:
- Relevant, sufficiently representative, and as free of errors as possible
- Appropriate statistical properties with regard to the persons or groups on which the system is intended to be used
- Examination for possible biases, especially where outputs influence decisions concerning natural persons
For credit scoring AI, this means documenting the representativeness of training data across demographic groups, testing for discriminatory outcomes, and maintaining data quality records. This is a significant operational undertaking for firms using legacy datasets.
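To make the bias-testing expectation concrete, here is a minimal sketch of the kind of disparity check a firm might run on credit scoring training data. The column names ("group", "approved") are hypothetical, and the 0.8 review threshold borrows the US "four-fifths" rule purely as an illustrative benchmark, not an AI Act requirement.

```python
# Minimal sketch of an Article 10-style representativeness/bias check on
# credit scoring training data. Column names are hypothetical; the 0.8
# threshold is an illustrative benchmark, not an AI Act requirement.
import pandas as pd

def approval_rate_disparity(df: pd.DataFrame,
                            group_col: str = "group",
                            outcome_col: str = "approved") -> pd.DataFrame:
    """Compare per-group approval rates against the most favoured group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    report = rates.to_frame("approval_rate")
    # Disparate impact ratio: each group's approval rate vs the highest.
    report["impact_ratio"] = report["approval_rate"] / report["approval_rate"].max()
    report["review"] = report["impact_ratio"] < 0.8  # flag for human review
    return report

# Toy run on synthetic data
data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})
print(approval_rate_disparity(data))
```

Checks like this would feed the data quality records the article describes, alongside documentation of how the training population compares to the intended customer base.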
3. Technical Documentation (Article 11)
Comprehensive technical documentation must be drawn up before the system is placed on the market or put into service, and kept up to date. This documentation must demonstrate compliance with all high-risk requirements and provide supervisory authorities with the information needed to assess compliance.
The documentation requirements are specified in Annex IV and include: a general description of the AI system, detailed description of elements and development process, monitoring and functioning information, risk management documentation, and a description of changes made throughout the lifecycle.
4. Record-Keeping and Logging (Article 12)
High-risk AI systems must be designed to automatically record events (logs) throughout their operational lifetime. Logs must enable:
- Traceability of the AI system's functioning
- Monitoring of the operation
- Post-market surveillance
Logging capabilities must be proportionate to the system's intended purpose. For remote biometric identification systems (Annex III point 1(a)), Article 12 prescribes minimum content: the period of each use, the reference database against which input data was checked, the input data for which the search led to a match, and the identification of the natural persons involved in verifying the results.
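As an illustration only, a deployer's log entry for a high-risk system might capture these elements in a structure like the following. The schema and field names are our own assumptions, mirroring the items listed above; the Act prescribes what must be recoverable, not a format.

```python
# Illustrative structure for an Article 12-style event log entry. The
# schema and field names are assumptions, not a prescribed format.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIEventLog:
    system_id: str           # internal identifier of the high-risk AI system
    period_start: str        # period of each use (ISO 8601)
    period_end: str
    reference_database: str  # database input data was checked against
    matched_input_ref: str   # reference to input data that led to a match
    verified_by: str         # natural person who verified the result
    output_summary: str      # what the system produced

entry = AIEventLog(
    system_id="credit-model-v3",
    period_start=datetime.now(timezone.utc).isoformat(),
    period_end=datetime.now(timezone.utc).isoformat(),
    reference_database="bureau-snapshot-2026-06",
    matched_input_ref="application-12345",
    verified_by="analyst-007",
    output_summary="score=642; referred for manual review",
)
print(json.dumps(asdict(entry), indent=2))  # ship to an immutable log store
```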
5. Transparency and Information to Deployers (Article 13)
High-risk AI systems must be designed to be sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. This includes:
- Clear information on the provider's identity and contact details
- The system's characteristics, capabilities, and limitations of performance
- Changes to the system and its performance pre-determined by the provider at the time of initial conformity assessment
- Human oversight measures
- Expected lifetime and necessary maintenance
For banks using third-party AI credit scoring models, this means the model provider must deliver sufficient documentation for the bank — as deployer — to understand what the model does, how it works, and what its limitations are. Black-box models will face acute compliance challenges.
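One practical step is to check each vendor documentation pack against the Article 13 information items before onboarding a model. The sketch below is a hypothetical due-diligence helper; the key names and dict-based document structure are assumptions, not a prescribed format.

```python
# Hypothetical due-diligence helper: check a vendor documentation pack
# against the Article 13 information items listed above. The keys and
# document structure are our own assumptions.
ARTICLE_13_ITEMS = [
    "provider_identity",
    "characteristics_capabilities_limitations",
    "predetermined_changes",
    "human_oversight_measures",
    "expected_lifetime_and_maintenance",
]

def missing_article_13_items(provider_docs: dict) -> list[str]:
    """Return the information items that are absent or empty."""
    return [item for item in ARTICLE_13_ITEMS if not provider_docs.get(item)]

vendor_pack = {
    "provider_identity": "Acme Scoring GmbH, ai-compliance@example.com",
    "characteristics_capabilities_limitations": "Gradient-boosted scorecard ...",
    "human_oversight_measures": "",  # empty: should block onboarding
}
print(missing_article_13_items(vendor_pack))
# ['predetermined_changes', 'human_oversight_measures',
#  'expected_lifetime_and_maintenance']
```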
6. Human Oversight (Article 14)
High-risk AI systems must be designed to allow effective oversight by natural persons during the period of use. Human oversight measures must:
- Enable the person to fully understand the capacities and limitations of the system
- Enable appropriate monitoring of the system's operation to detect anomalies, dysfunctions, and unexpected performance
- Enable the person to decide not to use the system, or to disregard, override, or reverse the output
- Enable the person to intervene or interrupt the system through a "stop" mechanism
This is particularly significant for automated credit decisions. The AI Act's human oversight requirements go beyond existing consumer credit law (CCD/MCD) "meaningful human involvement" standards and require that the human overseer can genuinely override the system — not merely rubber-stamp its outputs.
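A minimal sketch of what an operationally effective override point might look like, under assumed names for a hypothetical decision workflow. The design point is that the human action, not the model output, determines what takes effect, and a stop path genuinely interrupts the system.

```python
# Minimal sketch of an Article 14-style override point in an automated
# credit decision flow. All names here are illustrative assumptions.
from enum import Enum

class Oversight(Enum):
    ACCEPT = "accept"      # use the AI output as-is
    OVERRIDE = "override"  # substitute a human decision
    STOP = "stop"          # interrupt the system entirely

def final_decision(ai_output: str, action: Oversight,
                   human_decision: str | None = None) -> str:
    if action is Oversight.STOP:
        raise RuntimeError("System interrupted by human overseer")
    if action is Oversight.OVERRIDE:
        if human_decision is None:
            raise ValueError("override requires an explicit human decision")
        return human_decision
    return ai_output

print(final_decision("decline", Oversight.OVERRIDE, human_decision="refer"))
```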
7. Accuracy, Robustness, and Cybersecurity (Article 15)
High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity. Accuracy metrics must be declared and made available to deployers. Robustness requirements include resilience to errors, faults, and attempts at manipulation by malicious third parties.
The cybersecurity dimension creates a direct bridge to DORA: financial entities' AI systems must meet both the AI Act's robustness/cybersecurity standards and DORA's ICT risk management requirements. This is not a contradiction — DORA's framework is more operationally detailed, while the AI Act adds AI-specific robustness requirements (adversarial attacks, data poisoning, model manipulation).
The Provider vs Deployer Framework
The AI Act distinguishes between providers (entities that develop an AI system, or have one developed, and place it on the market or put it into service under their own name) and deployers (entities that use an AI system under their authority). Financial institutions are typically deployers of AI systems provided by third-party vendors, but may also be providers where they develop proprietary AI models.
If You Are a Deployer (Most Financial Institutions)
Article 26 imposes obligations on deployers of high-risk AI systems:
- Use the system in accordance with the provider's instructions of use
- Ensure that input data is relevant and sufficiently representative
- Monitor the operation based on the provider's instructions
- Keep logs automatically generated by the system for at least six months (or longer if required by sector-specific law)
- Perform a fundamental rights impact assessment (Article 27) before deployment, where the deployer is a financial institution or insurance undertaking — this is mandatory, not voluntary
- Inform natural persons that they are subject to the use of a high-risk AI system (where decisions affect them)
- Provide meaningful explanations of decisions taken on the basis of high-risk AI output, where those decisions produce legal effects or similarly significant effects on individuals
If You Are a Provider (Developing Proprietary AI)
The obligations are more extensive: full compliance with Articles 8-15, conformity assessment (Article 43), EU declaration of conformity (Article 47), CE marking (Article 48), registration in the EU database (Article 49), post-market monitoring (Article 72), and serious incident reporting (Article 73).
Important: Under Article 25(1), a deployer that puts its name or trademark on a high-risk AI system, substantially modifies it, or changes its intended purpose becomes a provider and assumes all provider obligations. Financial institutions that customise or fine-tune third-party AI models should assess whether they have crossed the provider threshold.
Intersection with Existing Financial Regulation
AI Act + DORA
DORA governs ICT risk management, incident reporting, and third-party ICT oversight for financial entities. The AI Act adds AI-specific requirements on top. Key intersections:
| Domain | DORA | AI Act | Resolution |
|---|---|---|---|
| Risk management | ICT risk management framework (Arts 5-16) | AI risk management system (Art 9) | Integrate AI risk into DORA framework; add AI-specific risk assessments |
| Incident reporting | Major ICT incident reporting (Arts 17-23) | Serious incident reporting (Art 73) | Map AI-specific incidents to DORA classification criteria; report to both NCA and AI Act market surveillance authority if applicable |
| Third-party risk | ICT third-party risk management (Arts 28-44) | Provider obligations flow down to deployer (Art 26) | Extend DORA vendor due diligence to include AI Act compliance evidence from AI providers |
| Testing | Digital operational resilience testing (Arts 24-27) | Accuracy and robustness testing (Art 15) | Add AI-specific test scenarios (adversarial attacks, bias testing) to DORA testing programme |
| Cybersecurity | ICT security requirements throughout DORA | Cybersecurity requirements (Art 15) | DORA framework satisfies AI Act cybersecurity requirements in most cases |
AI Act + MiFID II
MiFID II already imposes requirements on algorithmic trading (Article 17) and product governance (Articles 16(3), 24). The AI Act adds:
- Where AI is used in suitability assessments (Article 25 MiFID II), the transparency and human oversight requirements of the AI Act apply on top
- AI-driven order execution systems must meet both MiFID II best execution requirements and AI Act robustness standards
- Product governance processes using AI for target market assessment must comply with AI Act data governance requirements
AI Act + SFDR / Taxonomy Regulation
Where AI is used to produce sustainability metrics, ESG scores, or taxonomy alignment assessments:
- The AI system's data governance requirements (Article 10) intersect with SFDR's data quality expectations for PAI disclosures
- Transparency requirements (Article 13) support SFDR's broader disclosure regime
- Firms should document how AI-derived ESG metrics are produced and what their limitations are
AI Act + Consumer Credit Directive / Mortgage Credit Directive
Automated creditworthiness assessments under the Consumer Credit Directive (Directive 2008/48/EC) and Mortgage Credit Directive (Directive 2014/17/EU) already require meaningful human involvement. The AI Act's human oversight requirements (Article 14) and the right to meaningful explanation of AI-based decisions add specific technical and process requirements on top of existing consumer credit law.
Timeline: What Financial Institutions Must Do and When
| Date | Milestone | Financial Services Impact |
|---|---|---|
| 1 August 2024 | AI Act enters into force | Start planning; establish AI governance team |
| 2 February 2025 | Prohibited AI practices apply; AI literacy obligation (Article 4) | Review AI portfolio against Article 5 prohibitions; ensure staff AI literacy programmes |
| 2 August 2025 | GPAI model obligations (Chapter V) apply; Member States designate competent authorities (Article 70) | GPAI providers must comply; identify your national AI Act authority (may be the financial NCA, may be a separate body) |
| 2 February 2026 | Commission guidelines on high-risk classification due (Article 6(5)) | Review borderline classifications (e.g., AML, robo-advisory) against the guidelines |
| 2 August 2026 | High-risk AI system obligations and Article 50 transparency obligations apply | Full compliance with Articles 6-49 for all high-risk AI systems in Annex III; chatbots and synthetic content generators must disclose their AI nature |
| 2 August 2027 | Obligations for high-risk AI systems that are also safety components of products in Annex I | Limited additional impact for pure financial services firms |
Preparation Checklist for August 2026
Immediate (Q1-Q2 2026):
- AI inventory: Map all AI systems in use across the organisation. Classify each as prohibited, high-risk, limited-risk, or minimal-risk under the AI Act framework (a minimal inventory sketch follows this list)
- Gap assessment: For each high-risk system, assess current compliance against Articles 8-15 requirements
- Provider engagement: For third-party AI systems, engage vendors on their AI Act compliance roadmap — request technical documentation (Annex IV), conformity declarations, and CE marking timelines
- Fundamental rights impact assessment: For credit scoring, insurance pricing, and other Annex III point 5 systems, begin the Article 27 FRIA process
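As promised above, a minimal sketch of what an inventory record might look like under the Act's four-tier framework. The system names and the Annex III mapping shown are illustrative assumptions; borderline classifications still need legal review.

```python
# Minimal AI inventory sketch under the Act's four-tier framework.
# System names and the Annex III mapping are illustrative only.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Article 5
    HIGH = "high-risk"         # Article 6 / Annex III
    LIMITED = "limited-risk"   # Article 50 transparency
    MINIMAL = "minimal-risk"

inventory = [
    {"system": "retail-credit-scoring", "tier": RiskTier.HIGH,    "basis": "Annex III 5(b)"},
    {"system": "fraud-detection",       "tier": RiskTier.MINIMAL, "basis": "5(b) exception"},
    {"system": "customer-chatbot",      "tier": RiskTier.LIMITED, "basis": "Article 50"},
]

high_risk = [s["system"] for s in inventory if s["tier"] is RiskTier.HIGH]
print(f"{len(high_risk)} system(s) need Articles 8-15 compliance: {high_risk}")
```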
Pre-deadline (Q2-Q3 2026):
- Risk management integration: Integrate AI risk management into existing DORA ICT risk framework or establish parallel AI risk management system
- Data governance: Document training data provenance, representativeness, and bias testing for all high-risk AI systems
- Human oversight: Ensure genuine human oversight capability for all high-risk AI decision-making processes — not just a theoretical override, but operationally effective intervention
- Logging and monitoring: Verify that high-risk AI systems generate compliant automatic logs and that log retention meets the six-month minimum
- Transparency documentation: Ensure deployer-facing documentation from AI providers meets Article 13 requirements
Ongoing:
- Post-market monitoring: Establish continuous monitoring processes for high-risk AI system performance, accuracy drift, and bias emergence
- Incident procedures: Define what constitutes a "serious incident" under Article 73 and integrate reporting procedures with DORA incident reporting
- Governance: Assign clear accountability for AI Act compliance — this may sit with the same governance function responsible for DORA, or with a dedicated AI governance role
Enforcement and Penalties
The AI Act establishes a tiered penalty structure:
- Prohibited AI systems: Up to EUR 35 million or 7% of total worldwide annual turnover (whichever is higher)
- High-risk AI system non-compliance: Up to EUR 15 million or 3% of total worldwide annual turnover
- Incorrect information to authorities: Up to EUR 7.5 million or 1% of total worldwide annual turnover
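Because each cap is the higher of the fixed amount and the turnover percentage, large institutions should calculate against worldwide turnover rather than the headline figure. A worked example with an invented EUR 2 billion turnover:

```python
# Worked example of the tiered penalty caps: each maximum is the higher
# of the fixed amount and the turnover percentage. The EUR 2bn turnover
# figure is invented for illustration.
TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),  # Article 5 breaches
    "high_risk_breach":      (15_000_000, 0.03),
    "incorrect_information": (7_500_000,  0.01),
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    fixed_cap, pct_cap = TIERS[tier]
    return max(fixed_cap, pct_cap * worldwide_turnover_eur)

# For a bank with EUR 2bn turnover, 3% (EUR 60m) exceeds the EUR 15m floor
print(f"EUR {max_fine('high_risk_breach', 2_000_000_000):,.0f}")  # EUR 60,000,000
```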
For financial institutions already subject to MiFID II, CRD, DORA, and sectoral penalty regimes, the AI Act adds another potential enforcement layer. National competent authorities designated under Article 70 will be responsible for market surveillance and enforcement — which national authority this is varies by Member State and may not be the financial supervisor.
What This Means in Practice
The EU AI Act does not exist in a regulatory vacuum. For financial institutions, it adds a new compliance dimension to AI systems that are already subject to sectoral regulation. The practical challenge is integration: building AI governance that satisfies the AI Act's requirements without duplicating what DORA, MiFID II, and existing supervisory expectations already demand.
The firms that will navigate this most efficiently are those that:
- Start with an inventory. You cannot comply with the AI Act if you do not know what AI systems you have. Shadow AI — models deployed by business units without central oversight — is the biggest compliance risk.
- Treat the AI Act as an extension of DORA, not a separate regime. The risk management, testing, incident reporting, and third-party oversight requirements overlap significantly. Build one integrated framework.
- Engage providers now. If you are a deployer, your compliance depends heavily on your AI providers. Providers that cannot deliver Annex IV technical documentation or demonstrate conformity assessment by August 2026 put your compliance at risk.
- Do not underestimate the fundamental rights impact assessment. Article 27 requires financial institutions to assess the impact of high-risk AI on fundamental rights before deployment. This is a new requirement with no direct precedent in financial regulation, and the methodology is still being developed. Starting early is essential.
The August 2026 deadline is less than five months away. The time to act is now.