The High-Risk Countdown
August 2, 2026 stands as perhaps the most consequential deadline in the EU AI Act's implementation timeline. On this date, the full obligations for high-risk AI systems take effect, requiring conformity assessments, CE marking, EU database registration, and comprehensive technical documentation. While the European Commission has proposed a one-year delay for certain systems, organizations should prepare as if the original deadline stands.
The four-tier risk structure introduced by the AI Act — unacceptable (banned since February 2025), high-risk, limited-risk, and minimal-risk — demands that every AI system be classified and its obligations mapped. For high-risk systems, these obligations are extensive and technically demanding.
What Makes an AI System High-Risk?
The EU AI Act defines high-risk AI systems through two primary channels. First, AI systems that serve as safety components of products already subject to EU harmonized legislation (such as medical devices, machinery, or toys) automatically inherit high-risk classification. Second, Annex III of the Act lists specific use cases deemed high-risk, including biometric identification, critical infrastructure management, educational access, employment decisions, credit scoring, law enforcement, and immigration management.
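The two classification channels above are essentially a rule set, which can be sketched in code. This is a minimal illustration, not legal logic: the class, function, and area names are invented for the example, and the Annex III areas are paraphrased from the article rather than quoted from the Regulation.

```python
from dataclasses import dataclass, field

# Annex III use-case areas as summarized in the article (illustrative
# labels, not the legal text of the Regulation)
ANNEX_III_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_access",
    "employment_decisions",
    "credit_scoring",
    "law_enforcement",
    "immigration_management",
}

@dataclass
class AISystem:
    name: str
    # True if the system is a safety component of a product already
    # covered by EU harmonized legislation (medical devices, machinery, toys)
    safety_component_of_harmonised_product: bool = False
    use_cases: set = field(default_factory=set)

def is_high_risk(system: AISystem) -> bool:
    """Apply the two classification channels described above."""
    # Channel 1: safety component of a harmonized-legislation product
    if system.safety_component_of_harmonised_product:
        return True
    # Channel 2: any use case falling under an Annex III area
    return bool(system.use_cases & ANNEX_III_AREAS)

cv_screener = AISystem("cv-screener", use_cases={"employment_decisions"})
faq_bot = AISystem("faq-bot", use_cases={"customer_support"})
```

An inventory pass like this, run over every deployed system, is the practical starting point for the classification exercise the article recommends; borderline cases still need human legal review.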
The Commission is developing detailed guidelines on high-risk classification, with final guidance expected by February 2, 2026. However, waiting for these guidelines before beginning classification is a risky strategy — the fundamental criteria are already defined in the legislation itself.
Conformity Assessment Requirements
For high-risk AI systems, conformity assessments represent the most resource-intensive compliance obligation. These assessments must demonstrate that the system meets the requirements of Articles 9–15 of the Act: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity.
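A compliance team typically tracks these requirement areas as a checklist with an evidence status per area. The sketch below assumes the requirement areas of Articles 9–15 of the Act; the status labels and function name are invented for illustration.

```python
from enum import Enum

class Status(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    EVIDENCED = "evidenced"   # documented evidence ready for assessment

# Requirement areas of Articles 9-15 (illustrative identifiers)
REQUIREMENTS = [
    "risk_management",
    "data_governance",
    "technical_documentation",
    "record_keeping",
    "transparency",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
]

def gap_report(checklist: dict) -> list:
    """Return the requirement areas that still lack documented evidence."""
    return [
        r for r in REQUIREMENTS
        if checklist.get(r, Status.NOT_STARTED) != Status.EVIDENCED
    ]

checklist = {
    "risk_management": Status.EVIDENCED,
    "data_governance": Status.IN_PROGRESS,
}
```

Running `gap_report(checklist)` on a per-system basis gives a concrete view of how far each high-risk system is from assessment-readiness.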
The risk management system must be a continuous iterative process running throughout the AI system's lifecycle — not a one-time assessment. It must identify and analyze known and foreseeable risks, estimate and evaluate risks that may emerge during use, and adopt appropriate risk management measures based on these evaluations.
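The iterative character of that process can be made concrete: each cycle identifies and scores risks, applies mitigations, and re-evaluates what remains above an acceptance threshold. The scoring scale, threshold, and names below are all assumptions made for the sketch, not anything prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (rare) .. 5 (frequent) -- illustrative scale
    severity: int    # 1 (minor) .. 5 (critical) -- illustrative scale

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

def risk_cycle(risks: list, mitigations: dict, threshold: int = 6) -> list:
    """One pass of the iterative cycle: apply known mitigations, then
    return the residual risks still above the acceptance threshold.
    In practice this runs repeatedly across the system's lifecycle."""
    for risk in risks:
        reduction = mitigations.get(risk.description, 0)
        if reduction:
            risk.likelihood = max(1, risk.likelihood - reduction)
    return [r for r in risks if r.score > threshold]

risks = [
    Risk("bias in training data", likelihood=4, severity=4),
    Risk("audit log tampering", likelihood=2, severity=2),
]
residual = risk_cycle(risks, mitigations={"bias in training data": 2})
```

The point of the loop structure is that residual risks feed the next iteration: risks that survive mitigation trigger further measures, new evaluations during deployment add entries, and the cycle never formally terminates while the system is in use.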
Codes of Practice and Technical Standards
The EU has been developing codes of practice for AI providers, with the first versions due by May 2, 2025. These codes, while not legally binding, provide a practical roadmap for meeting the Act's substantive requirements and are expected to carry significant weight in enforcement decisions.
The AI Office and AI Board, both operational since August 2025, play central roles in interpreting and applying these standards. Organizations should monitor their guidance closely, as it provides the most authoritative interpretation of compliance expectations available before formal enforcement actions establish precedent.
Pre-2026 Classification Imperative
Regardless of whether the proposed delay to 2027 is approved, organizations should begin classification and preparation now. The conformity assessment process for high-risk systems is not something that can be completed in weeks — it requires systematic documentation, technical testing, and organizational processes that take months to develop and implement.
Arcus provides automated risk classification that maps your AI systems against the Act's criteria, identifies applicable obligations, and generates the technical documentation and risk management frameworks required for conformity assessment. Start with our free AI system assessment to understand where your organization stands.