Understanding Compliance Obligations
The EU AI Act establishes a complex web of obligations that vary based on two factors: the risk classification of the AI system and the role of the organization in the AI value chain. This part provides a practical roadmap for achieving and maintaining compliance.
💡 Compliance Philosophy
The AI Act takes a "lifecycle approach" to compliance. Obligations are not one-time requirements but ongoing responsibilities that must be maintained throughout the entire operational life of an AI system, from design through deployment to decommissioning.
Roles in the AI Value Chain
The EU AI Act defines specific roles with distinct obligations:
Provider
Develops or has developed an AI system and places it on the market or puts it into service under their own name or trademark.
- Ensure compliance with the high-risk requirements
- Implement quality management system
- Draw up technical documentation
- Conduct conformity assessment
- Register in EU database
- Apply CE marking
- Conduct post-market monitoring
Deployer
Uses an AI system under their authority (except in the course of a personal, non-professional activity).
- Implement technical/organizational measures
- Assign human oversight
- Monitor operation
- Keep logs (when under their control)
- Conduct FRIA where required
- Inform workers/representatives
- Comply with transparency obligations
Importer
Places on the EU market an AI system that bears the name or trademark of a person established in a third country.
- Verify conformity assessment completed
- Verify technical documentation exists
- Verify CE marking and EU declaration
- Ensure provider has appointed EU representative
- Indicate own contact details
- Maintain documentation for 10 years
Distributor
Makes an AI system available on the market (other than the provider or the importer).
- Verify CE marking is affixed
- Verify required documentation accompanies system
- Verify provider/importer compliance
- Refrain from making non-compliant systems available
- Inform provider/importer of non-compliance
⚠ Role Transformation
An importer, distributor, or deployer becomes a "provider" and assumes provider obligations when they: (1) put their name or trademark on a high-risk AI system already on the market, (2) make a substantial modification to a high-risk system, or (3) modify the intended purpose of an AI system such that it becomes high-risk.
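Compliance tooling sometimes encodes these triggers as explicit checks. The sketch below is illustrative only; the type and function names are this example's own, not terms from the Act.

```python
# Illustrative sketch of the three role-transformation triggers described
# above; names and structure are hypothetical, not taken from the Act.
from dataclasses import dataclass

@dataclass
class OperatorAction:
    rebranded: bool                  # put own name/trademark on a high-risk system
    substantial_modification: bool   # substantially modified a high-risk system
    repurposed_to_high_risk: bool    # changed intended purpose, making it high-risk

def assumes_provider_obligations(action: OperatorAction) -> bool:
    """True if an importer, distributor, or deployer becomes a 'provider'."""
    return (
        action.rebranded
        or action.substantial_modification
        or action.repurposed_to_high_risk
    )
```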
Provider Obligations for High-Risk AI Systems
Quality Management System (Article 17)
Providers must establish, implement, document, and maintain a quality management system covering:
- Strategy for regulatory compliance
- Techniques, procedures, and systematic actions for design, development, and examination
- Examination, test, and validation procedures before, during, and after development
- Technical specifications including standards to be applied
- Systems and procedures for data management
- Risk management system
- Post-market monitoring system
- Procedures related to serious incident reporting
- Communication with national competent authorities and bodies
- Systems for record-keeping
- Resource management including supply chain
- Accountability framework
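A compliance team might track coverage of these elements as a simple checklist. A minimal sketch, assuming element keys that merely paraphrase the list above (they are not an official taxonomy):

```python
# Hypothetical QMS coverage checklist; keys paraphrase the Article 17
# elements listed above and carry no official status.
QMS_ELEMENTS = {
    "regulatory_compliance_strategy",
    "design_and_development_procedures",
    "examination_test_validation",
    "technical_specifications",
    "data_management",
    "risk_management_system",
    "post_market_monitoring",
    "serious_incident_reporting",
    "authority_communication",
    "record_keeping",
    "resource_and_supply_chain_management",
    "accountability_framework",
}

def qms_gaps(documented: set[str]) -> set[str]:
    """Return the elements not yet covered by documented procedures."""
    return QMS_ELEMENTS - documented
```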
Technical Documentation (Article 11)
Technical documentation must contain:
| Category | Required Content |
| --- | --- |
| General Description | Intended purpose; provider identity; version; how the system interacts with hardware/software; forms of input data; instructions for use |
| System Description | General logic; key design choices; main classification choices; system optimization; expected output; computational resources; development lifecycle description |
| Monitoring & Testing | Validation and testing procedures; metrics used; test logs; cybersecurity measures |
| Risk Management | Risk management procedures; foreseeable unintended outcomes; human oversight measures; preliminary FRIA |
| Changes Log | Changes made throughout the lifecycle; previous versions; update mechanisms |
| Standards Applied | Harmonized standards applied; common specifications; other standards and technical specifications |
Conformity Assessment Procedures
High-risk AI systems must undergo conformity assessment before market placement. The applicable procedure depends on the system type:
Internal Control (Annex VI)
Most high-risk AI systems can use internal conformity assessment (self-assessment):
- Provider verifies quality management system
- Provider examines technical documentation
- Provider verifies system complies with requirements
- Provider draws up EU declaration of conformity
- No notified body involvement required
Third-Party Assessment
Notified body involvement required for:
- Remote biometric identification systems
- Critical infrastructure AI
- AI systems already subject to third-party assessment under product safety legislation
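Expressed as code, this simplified routing rule might look like the sketch below; the category labels are invented for illustration, and the real Annex VI/VII choice carries more nuance (for example, whether harmonized standards were applied in full):

```python
# Simplified routing between internal control (Annex VI) and notified-body
# assessment, mirroring the categories listed above. Illustrative only.
THIRD_PARTY_CATEGORIES = {
    "remote_biometric_identification",
    "critical_infrastructure",
    "product_safety_third_party_regime",
}

def assessment_route(system_category: str) -> str:
    """Pick the conformity assessment route for a high-risk system."""
    if system_category in THIRD_PARTY_CATEGORIES:
        return "notified_body_assessment"
    return "internal_control_annex_vi"
```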
✓ Conformity Presumption
High-risk AI systems that are in conformity with harmonized standards (or parts thereof) shall be presumed to be in conformity with the requirements covered by those standards. This provides a practical pathway to demonstrate compliance.
Deployer Obligations
General Obligations (Article 26)
- Use in Accordance with Instructions: Take appropriate technical and organizational measures to use the system according to the provider's instructions for use
- Human Oversight: Assign human oversight to natural persons with necessary competence, training, and authority
- Input Data Relevance: Ensure input data is relevant and sufficiently representative for the intended purpose
- Monitoring: Monitor operation based on instructions for use and inform provider of risks
- Log Retention: Keep automatically generated logs (when under their control) for a minimum of six months
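The log-retention floor lends itself to an automated check. A minimal sketch, assuming a 183-day approximation of six months and ignoring any longer retention periods that other EU or national law may impose:

```python
# Hypothetical retention check for the deployer's six-month log minimum.
from datetime import date, timedelta

MIN_RETENTION = timedelta(days=183)  # rough six months; other law may require longer

def may_delete_log(created: date, today: date) -> bool:
    """True once a log has been kept for at least the minimum period."""
    return today - created >= MIN_RETENTION
```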
Fundamental Rights Impact Assessment (FRIA)
Certain deployers must conduct a FRIA before putting a high-risk AI system into use:
💡 Who Must Conduct FRIA?
- Deployers that are bodies governed by public law
- Private entities providing public services
- Deployers using systems for creditworthiness evaluation
- Deployers using systems for risk assessment and pricing for life/health insurance
The FRIA must include:
- Description of the deployer's processes where the AI system will be used
- Period and frequency of intended use
- Categories of natural persons and groups likely to be affected
- Specific risks of harm likely to impact identified categories
- Human oversight measures
- Measures to be taken if risks materialize
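Deployers who template their assessments sometimes mirror these elements in a structured record. An illustrative sketch, with field names of this example's own choosing:

```python
# Illustrative FRIA record mirroring the six required elements above;
# field names are hypothetical, not terms from the Act.
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    deployer_processes: str          # processes in which the system will be used
    period_and_frequency: str        # intended period and frequency of use
    affected_groups: list[str]       # categories of persons/groups likely affected
    risks_of_harm: list[str]         # specific risks to the identified categories
    oversight_measures: list[str]    # human oversight measures
    mitigation_measures: list[str] = field(default_factory=list)  # if risks materialize
```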
Transparency Obligations for Deployers
- Inform natural persons that they are subject to the use of a high-risk AI system (unless obvious from the context)
- For emotion recognition or biometric categorization: inform persons exposed before processing
- Inform employee representatives and affected employees about AI use in the workplace
Implementation Timeline
The EU AI Act has a phased implementation approach:
August 1, 2024
Entry into Force
The AI Act officially enters into force. The clock starts on all transition periods.
February 2, 2025
Prohibited Practices Apply
Chapter II prohibitions on unacceptable risk AI practices become enforceable. Organizations must cease any prohibited AI activities.
August 2, 2025
GPAI & Governance
Rules on general-purpose AI (GPAI) models apply. National competent authorities must be designated. Notified-body provisions apply. Penalty provisions become applicable.
August 2, 2026
Full Application
Most provisions fully applicable. High-risk AI system requirements enforceable. All operator obligations in effect.
August 2, 2027
Extended Deadline
Extended deadline for high-risk AI systems that are safety components of products covered by certain EU harmonization legislation (Annex I, Section B).
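Because obligations phase in by date, the timeline reduces to a simple lookup. An illustrative sketch of the milestones above:

```python
# Sketch: which milestones from the timeline above are in effect on a date.
from datetime import date

MILESTONES = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 2), "Prohibited practices apply"),
    (date(2025, 8, 2), "GPAI rules, governance, penalties"),
    (date(2026, 8, 2), "Full application, incl. high-risk requirements"),
    (date(2027, 8, 2), "Annex I, Section B extended deadline"),
]

def milestones_in_effect(on: date) -> list[str]:
    """Return every milestone already applicable on the given date."""
    return [label for when, label in MILESTONES if on >= when]
```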
⚠ Transition Period
High-risk AI systems already placed on the market before August 2, 2026, can continue operating as long as they do not undergo a substantial modification. However, they must comply once substantially modified or re-placed on the market.