Key Takeaway
Starting with a thorough risk classification of all AI systems in your portfolio lets you focus engineering resources on high-risk systems that require the most compliance work. This roadmap breaks down the EU AI Act into actionable engineering tasks across three phases: Assessment, Implementation, and Certification, with specific timelines tied to the Act's enforcement milestones.
Prerequisites
- An inventory of all AI systems deployed or planned across the organization
- Understanding of which AI systems are placed on the EU market or affect EU residents
- Access to legal counsel with EU AI Act expertise
- Familiarity with your organization's existing quality management and risk management systems
- Executive sponsorship and budget allocation for compliance activities
- Completed or in-progress AI governance framework (see: AI Governance Framework)
Enforcement Timeline
The EU AI Act entered into force on August 1, 2024, with a staggered enforcement timeline. Understanding this timeline is critical for prioritizing compliance work. Organizations that wait until obligations become enforceable will not have enough time to implement the required controls, documentation, and organizational changes.
1. February 2, 2025: Prohibited Practices
Prohibitions on unacceptable-risk AI systems take effect. This includes social scoring, real-time remote biometric identification in public spaces (with limited exceptions), exploitation of vulnerabilities, and subliminal manipulation. Audit your portfolio immediately for any systems that fall into these categories.
2. August 2, 2025: GPAI Model Obligations
Obligations for general-purpose AI model providers take effect. If your organization provides foundation models or general-purpose AI systems to others, you must comply with transparency requirements, copyright compliance, and (for systemic risk models) adversarial testing and incident reporting.
3. August 2, 2026: High-Risk System Obligations
The core of the Act takes effect. High-risk AI systems must comply with requirements for risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity. This is the most resource-intensive compliance milestone.
4. August 2, 2027: Extended Deadlines
Extended compliance deadline for high-risk AI systems that are components of larger regulated products (e.g., medical devices, automotive, aviation). These systems may need to comply with both the AI Act and sector-specific regulations.
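The milestone dates above can be encoded directly so internal tooling can flag which obligations are already enforceable on a given date. A minimal sketch (the milestone keys are our own labels, not terms from the Act):

```typescript
// EU AI Act enforcement milestones, keyed by illustrative labels.
const ENFORCEMENT_MILESTONES: Record<string, string> = {
  "prohibited-practices": "2025-02-02",
  "gpai-obligations": "2025-08-02",
  "high-risk-obligations": "2026-08-02",
  "embedded-high-risk-extended": "2027-08-02",
};

/** Returns the milestone keys whose obligations are enforceable on `asOf`. */
function enforceableMilestones(asOf: Date): string[] {
  return Object.entries(ENFORCEMENT_MILESTONES)
    .filter(([, date]) => new Date(date) <= asOf)
    .map(([key]) => key);
}
```

For example, a check run in early 2026 would report the prohibited-practices and GPAI milestones as enforceable, but not yet the high-risk obligations.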
Risk Classification System
The Act classifies AI systems into four risk tiers, each with different compliance obligations. The classification depends on the system's intended purpose and the sector in which it operates, not on the underlying technology. A simple logistic regression model used for credit scoring is high-risk, while a sophisticated LLM used for internal code review is likely limited or minimal risk.
| Risk Tier | Examples | Key Obligations | Penalties for Non-Compliance |
|---|---|---|---|
| Unacceptable Risk (Prohibited) | Social scoring by governments, real-time biometric ID in public (with exceptions), manipulation of vulnerable groups | Complete prohibition. Must be removed from market and decommissioned. | Up to 35M EUR or 7% of global annual turnover |
| High Risk | Credit scoring, recruitment tools, medical diagnostics, law enforcement, critical infrastructure management | Risk management system, data governance, technical documentation, logging, human oversight, accuracy/robustness requirements, conformity assessment | Up to 15M EUR or 3% of global annual turnover |
| Limited Risk | Chatbots, emotion recognition, deepfake generation, AI-generated content | Transparency obligations: users must be informed they are interacting with AI or viewing AI-generated content | Up to 15M EUR or 3% of global annual turnover |
| Minimal Risk | Spam filters, inventory management, AI-powered search, content recommendations | No mandatory requirements (voluntary codes of conduct encouraged) | N/A (but misclassification penalties apply) |
Risk classification is not a self-certification exercise. If your system is challenged by a regulator and found to be misclassified (e.g., a system you classified as limited risk that should have been high risk), penalties apply retroactively. Document your classification rationale thoroughly and have it reviewed by legal counsel.
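Since the classification rationale must survive regulatory scrutiny, it helps to capture it as structured data rather than prose scattered across wikis. A hedged sketch of what such a record might look like (the field names are our own, not drawn from the Act):

```typescript
type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

// One reviewable record per AI system in the portfolio.
interface RiskClassificationRecord {
  systemName: string;
  tier: RiskTier;
  intendedPurpose: string;    // classification hinges on purpose, not technology
  annexIIICategory?: string;  // set when the use case matches an Annex III entry
  rationale: string;          // why this tier was chosen
  legalReviewer: string;      // counsel who signed off on the classification
  reviewedOn: string;         // ISO date of the legal review
}

// Example: credit scoring appears in Annex III, so even a simple model
// is high-risk regardless of its technical sophistication.
const creditScorer: RiskClassificationRecord = {
  systemName: "loan-approval-scorer",
  tier: "high",
  intendedPurpose: "Creditworthiness assessment of natural persons",
  annexIIICategory: "Access to essential private services (credit scoring)",
  rationale: "Intended purpose matches an Annex III use case; no exemption applies.",
  legalReviewer: "external-counsel",
  reviewedOn: "2025-11-14",
};
```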
Phase 1: Assessment (Months 1-3)
The assessment phase establishes your compliance baseline. It produces three deliverables: a complete AI system inventory with risk classifications, a gap analysis comparing current controls against Act requirements, and a resource plan for closing identified gaps. This phase is primarily an analytical exercise, but its output determines the scope and cost of the implementation phase.
Begin by cataloging every AI system in your organization: production systems, systems in development, third-party AI services you integrate, and AI components embedded in other products. For each system, document its intended purpose, the sectors and user populations it serves, the data it processes, and the decisions it influences. Then classify each system using the Act's risk tier definitions, paying careful attention to the Annex III list of high-risk use cases.
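The gap-analysis deliverable largely reduces to a set difference: which required controls does each system still lack? A minimal sketch, with placeholder control names (a real list would map to Articles 9 through 15):

```typescript
/** Returns the required controls a system has not yet implemented. */
function complianceGaps(required: string[], implemented: string[]): string[] {
  const have = new Set(implemented);
  return required.filter((control) => !have.has(control));
}

// Illustrative control names for a high-risk system.
const requiredForHighRisk = [
  "risk-management-system",
  "data-governance",
  "technical-documentation",
  "automatic-logging",
  "human-oversight",
];

const gaps = complianceGaps(requiredForHighRisk, [
  "automatic-logging",
  "data-governance",
]);
// `gaps` now lists the three controls still to be built.
```

Running this per system across the inventory yields the scope and cost inputs the resource plan needs.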
Phase 2: Implementation (Months 4-9)
The implementation phase builds the technical controls, documentation, and organizational processes required for compliance. For high-risk systems, this is substantial work. The Act requires a documented risk management system, data governance practices, technical documentation (the 'technical file'), automatic logging of system operation, transparency and provision of information to users, human oversight mechanisms, and demonstrated accuracy, robustness, and cybersecurity.
/**
 * EU AI Act compliance tracking for high-risk AI systems.
 * Maps Act requirements to implementation status.
 */
type ComplianceStatus =
  | "not-started"
  | "in-progress"
  | "implemented"
  | "verified"
  | "non-applicable";

interface ComplianceRequirement {
  articleRef: string; // e.g., "Art. 9" for Risk Management
  title: string;
  description: string;
  status: ComplianceStatus;
  owner: string;
  evidence: string[]; // Links to documentation
  deadline: string; // ISO date
  notes: string;
}

const HIGH_RISK_REQUIREMENTS: Omit<
  ComplianceRequirement,
  "status" | "owner" | "evidence" | "deadline" | "notes"
>[] = [
  {
    articleRef: "Art. 9",
    title: "Risk Management System",
    description:
      "Establish and maintain a risk management system " +
      "throughout the AI system lifecycle. Identify and " +
      "analyze known and foreseeable risks. Adopt risk " +
      "management measures. Test for residual risk.",
  },
  {
    articleRef: "Art. 10",
    title: "Data Governance",
    description:
      "Training, validation and testing datasets must be " +
      "relevant, representative, free of errors, and " +
      "complete. Data governance and management practices " +
      "must address collection, labeling, and preparation.",
  },
  {
    articleRef: "Art. 11",
    title: "Technical Documentation",
    description:
      "Prepare technical documentation demonstrating " +
      "compliance before the system is placed on the " +
      "market. Include system description, design " +
      "specifications, development process, and " +
      "monitoring measures.",
  },
  {
    articleRef: "Art. 12",
    title: "Record-Keeping (Logging)",
    description:
      "Design the system to automatically log events " +
      "during operation. Logs must enable monitoring " +
      "and traceability appropriate to the system's " +
      "intended purpose.",
  },
  {
    articleRef: "Art. 13",
    title: "Transparency",
    description:
      "Design for sufficient transparency to enable " +
      "deployers to interpret and use outputs " +
      "appropriately. Provide instructions for use " +
      "with relevant information.",
  },
  {
    articleRef: "Art. 14",
    title: "Human Oversight",
    description:
      "Design to be effectively overseen by natural " +
      "persons. Enable human understanding, intervention, " +
      "and override capabilities.",
  },
  {
    articleRef: "Art. 15",
    title: "Accuracy, Robustness, Cybersecurity",
    description:
      "Achieve appropriate levels of accuracy, robustness " +
      "and cybersecurity. Implement measures to address " +
      "errors, faults, and inconsistencies.",
  },
];

Phase 3: Certification (Months 10-12)
The certification phase prepares your organization for conformity assessment. For most high-risk AI systems, conformity assessment can be performed internally (self-assessment against harmonized standards). However, certain categories -- notably remote biometric identification -- require assessment by a notified body (an independent third-party assessor). The certification phase includes preparing the conformity assessment documentation, conducting internal audits against the technical file, engaging a notified body if required, and completing the EU Declaration of Conformity and CE marking.
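The internal-versus-notified-body decision can be made explicit in tooling. A simplified sketch of that routing logic, which deliberately flattens the Act's detailed rules (the actual assessment procedures depend on the Annex III category and on whether harmonized standards were applied in full, so treat this as a starting point for legal review, not a determination):

```typescript
type AssessmentRoute = "internal-control" | "notified-body";

/**
 * Simplified routing: remote biometric identification systems need a
 * notified body unless harmonized standards were applied in full;
 * other high-risk systems can typically self-assess (internal control).
 */
function assessmentRoute(opts: {
  isRemoteBiometricId: boolean;
  harmonizedStandardsApplied: boolean;
}): AssessmentRoute {
  if (opts.isRemoteBiometricId && !opts.harmonizedStandardsApplied) {
    return "notified-body";
  }
  return "internal-control";
}
```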
The technical file is the central artifact of conformity assessment. It must contain a general description of the system, detailed documentation of the system design, a description of the risk management system, data governance documentation, technical documentation per Article 11, instructions for use, evidence of accuracy and robustness testing, and a description of the post-market monitoring system. Plan for this documentation to take several weeks to compile even if the underlying controls are already in place.
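A simple completeness check over the technical file sections listed above can catch missing artifacts well before an audit. A sketch (the section labels paraphrase the paragraph above; they are not the Annex IV wording):

```typescript
// Sections the technical file must contain, per the list above.
const TECHNICAL_FILE_SECTIONS = [
  "general-description",
  "system-design",
  "risk-management-description",
  "data-governance-documentation",
  "article-11-technical-documentation",
  "instructions-for-use",
  "accuracy-robustness-evidence",
  "post-market-monitoring-plan",
] as const;

type SectionStatus = Record<string, boolean>; // section -> evidence compiled?

/** Returns the technical file sections that still lack compiled evidence. */
function missingSections(status: SectionStatus): string[] {
  return TECHNICAL_FILE_SECTIONS.filter((section) => !status[section]);
}

const stillMissing = missingSections({
  "general-description": true,
  "system-design": true,
});
// `stillMissing` lists the six sections left to compile.
```

Tracking this per system makes the "several weeks to compile" estimate concrete and assignable.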
Version History
1.0.0 · 2026-03-01
- Initial release with three-phase compliance roadmap aligned to EU AI Act enforcement timeline
- Risk classification comparison table with penalty levels
- TypeScript compliance tracker with Article-by-Article requirements mapping
- Enforcement timeline with key milestone dates through August 2027
- Production checklist with 13 items across assessment, implementation, and certification phases