The GPAI Documentation Mandate
Since August 2, 2025, providers of general-purpose AI (GPAI) models have faced a new reality: mandatory documentation and transparency obligations under the EU AI Act. While full enforcement powers for GPAI-specific violations won't activate until August 2, 2026, the substantive requirements are already in effect, and organizations should be actively building their compliance documentation.
The GPAI provisions represent the EU's recognition that foundation models and general-purpose AI systems present unique governance challenges. Unlike purpose-built AI systems that can be evaluated against specific use-case risks, GPAI models may be deployed across countless applications, each carrying different risk profiles.
Technical Documentation Requirements
GPAI providers must prepare comprehensive technical documentation covering model architecture, training methodology, evaluation results, and known limitations. The European Commission has published a template for these documentation requirements, providing a standardized format that facilitates both preparation and regulatory review.
Critically, the documentation must include detailed training data summaries — a requirement that has generated significant industry discussion given the scale and complexity of modern training datasets. These summaries must describe the data sources, selection criteria, pre-processing methods, and quality assurance measures applied during training.
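One way to keep such summaries consistent and machine-readable is to capture each data source as a structured record. The sketch below is illustrative only: the field names are assumptions for this example, not the Commission's official template fields.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class TrainingDataSource:
    """Illustrative record for one entry in a training data summary.

    Field names are assumptions for this sketch, not the fields of the
    Commission's documentation template.
    """
    name: str                       # corpus or dataset name
    origin: str                     # how the data was obtained (crawl, licensed, etc.)
    selection_criteria: str         # why this source was included
    preprocessing: list[str] = field(default_factory=list)   # cleaning/filtering steps
    quality_checks: list[str] = field(default_factory=list)  # QA measures applied

# Hypothetical example entry
source = TrainingDataSource(
    name="example-web-corpus",
    origin="public web crawl (hypothetical)",
    selection_criteria="English-language pages above a quality-score threshold",
    preprocessing=["deduplication", "PII scrubbing"],
    quality_checks=["toxicity filtering", "manual spot checks"],
)
print(asdict(source)["name"])
```

Keeping each source as a record like this makes it straightforward to render the full summary into whatever format a regulator or template ultimately requires.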
Copyright Compliance Obligations
One of the more contentious elements of the GPAI obligations relates to copyright compliance. Providers must implement a policy to comply with EU copyright law, particularly the text and data mining exceptions under Directive (EU) 2019/790. This includes making publicly available a sufficiently detailed summary of the content used for training, enabling rights holders to exercise their opt-out rights.

For organizations using third-party GPAI models as components, understanding these upstream obligations is essential. While the primary compliance burden falls on the model provider, deployers must ensure their own use of GPAI models doesn't create additional copyright risks or violate the terms under which the model was made available.
Systemic Risk Assessment for Large Models
GPAI models designated as having systemic risk — generally those trained with computational resources exceeding 10^25 FLOPs — face additional obligations including model evaluations, adversarial testing, incident tracking and reporting, and cybersecurity protections.
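A rough sense of whether a model approaches the systemic-risk presumption can be obtained from the common rule-of-thumb estimate of training compute as roughly 6 FLOPs per parameter per training token. The heuristic, the function name, and the model figures below are illustrative assumptions, not a regulatory formula.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute in FLOPs

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb estimate (~6 FLOPs per parameter per token) for
    dense transformer training. A heuristic, not a regulatory formula."""
    return 6 * params * tokens

# Hypothetical model: 70B parameters trained on 2T tokens
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.2e}", flops >= SYSTEMIC_RISK_THRESHOLD)  # well below 1e25
```

An estimate like this is only a first-pass screen; providers near the threshold would need a rigorous accounting of cumulative compute across all training runs.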
The AI Office plays a central role in overseeing GPAI compliance, with its supervisory remit over GPAI models applicable since August 2, 2025. The AI Board advises on the consistent application of these requirements across the EU, and organizations should monitor both bodies' guidance closely: the first enforcement decisions will establish important precedents for the industry.
Building Audit-Ready Compliance
For organizations developing or deploying GPAI models, the window between now and August 2026 enforcement should be used to build robust, auditable compliance documentation. This means implementing version-controlled documentation processes, establishing clear ownership of compliance artifacts, and conducting internal audits to identify and address gaps before regulators do.
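An internal gap check can be as simple as verifying that every expected compliance artifact actually exists in the version-controlled documentation repository. A minimal sketch follows; the artifact filenames and the `compliance/` directory are hypothetical, and the real list depends on which obligations apply to your model.

```python
from pathlib import Path

# Illustrative list of expected compliance artifacts; the actual set
# depends on the GPAI obligations applicable to your model.
REQUIRED_ARTIFACTS = [
    "technical_documentation.md",
    "training_data_summary.md",
    "copyright_policy.md",
    "evaluation_results.md",
]

def find_gaps(repo_root: str) -> list[str]:
    """Return the expected artifacts missing from the compliance repo."""
    root = Path(repo_root)
    return [name for name in REQUIRED_ARTIFACTS if not (root / name).exists()]

gaps = find_gaps("compliance/")  # hypothetical repository path
print(gaps)
```

Run as part of CI on the documentation repository, a check like this turns "do we have the paperwork?" into a question answered on every commit rather than during the audit itself.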
Arcus provides specialized support for GPAI compliance, including automated documentation generation aligned with Commission templates, continuous monitoring of GPAI-specific regulatory developments, and audit-ready evidence packages that demonstrate compliance across all applicable GPAI obligations.