EU AI Act Compliance: Requirements, Risks, and What to Document

About the author: Kayne McGladrey is a CISSP, Former Defense Industrial Base CISO, senior IEEE member, and cybersecurity strategist with extensive experience in governance, risk, and compliance programs. Kayne advises organizations on aligning emerging technologies, including AI systems, with established security frameworks such as SOC 2, ISO 27001, and NIST.
Key takeaways for EU AI Act compliance
→ Audit your AI systems against EU AI Act requirements now — validate Annex IV technical documentation, logging, and data governance. The initial August 2025 compliance date has passed, and full penalties begin in August 2026.
→ Build a continuous compliance evidence chain — document risk management across the full lifecycle (design, development, deployment, and post-market monitoring).
→ Establish full traceability and reproducibility — implement dataset versioning, model lineage tracking, and logging of inputs, outputs, and decision points.
→ Fix high-risk lifecycle gaps proactively — address documentation handoff failures, missing monitoring plans, and unclear contractual responsibilities.
→ Operationalize monitoring and human oversight — deploy real-time monitoring, feedback loops, and intervention controls to actively manage risk in production.
The shift from promise to proof
The EU AI Act ("the Act") entered into force in August 2024 and is being implemented in stages, with broader enforcement scheduled to start in August 2026. The regulation applies extraterritorially, meaning it affects providers and deployers located outside the European Union if they place AI systems on the EU market or if the output generated by those systems is used within the EU.
The phased implementation timeline creates immediate pressure on providers to demonstrate adherence to specific articles of the regulation. Key milestones include:
- August 2025: Rules for general-purpose AI models became applicable, requiring technical documentation and copyright compliance
- August 2026: The bulk of the Act takes effect, including strict requirements for high-risk AI systems under Annex III and transparency obligations for AI-generated content
- August 2027: Regulations extend to high-risk AI embedded in regulated products under Annex I
This guide is for compliance officers, technical leads, CISOs, and their legal advisors preparing for increased regulatory scrutiny. Organizations must prepare for potential reviews of their risk management systems, data governance, and cybersecurity measures. Failure to provide adequate documentation may result in significant administrative fines, making the preparation of sufficient evidence a top priority for legal and technical teams alike.
The anatomy of a conformity assessment under the EU AI Act
As outlined in Article 43, a conformity assessment under the EU AI Act is a mandatory, structured evaluation process verifying that high-risk AI systems comply with safety, transparency, data governance, and technical standards before entering the market.
Provider obligations and risk management
A conformity assessment process evaluates a comprehensive evidence chain rather than isolated documents. The provider bears the primary responsibility for proving compliance before placing a high-risk artificial intelligence system on the market.
This obligation stems from Article 16 and Article 17 of the Act. Providers must establish, implement, document and maintain the risk management system required by Article 9. Notified bodies (independent organizations designated by EU member states to assess conformity) carry out conformity assessment activities, while national market surveillance authorities handle oversight and enforcement.
Post-market monitoring and incident reporting
Evidence of compliance must show regular updates driven by post-market monitoring data as required under Article 72 and, where applicable, by serious-incident reports subject to the reporting obligations in Article 73. The documentation needs to demonstrate a lifecycle approach that spans from design and development through deployment and post-market monitoring.
This continuous loop ensures that risks are identified and mitigated promptly. As such, a static policy document would fail to meet these regulatory standards.
Dataset quality, provenance, and traceability
Data governance under Article 10 requires proof that datasets are relevant, representative, complete, and free of errors to the best extent possible. Acceptable evidence includes:
- Documented bias-detection and mitigation measures
- Records of data-preparation steps, such as cleaning and labelling
- Descriptions of identified data gaps or limitations when relevant
Article 10 further requires providers to maintain version control records and provenance information that enable traceability between datasets and model versions. This documentation must be producible as evidence during compliance assessments.
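To make this concrete, the sketch below shows one way such provenance records could be produced in practice. It is a minimal Python illustration assuming a file-based dataset; the `DatasetRecord` fields and the `dataset_lineage.jsonl` output are hypothetical choices, not an Annex IV schema.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class DatasetRecord:
    dataset_path: str
    sha256: str               # content hash ties the record to the exact bytes
    source: str               # where the data came from (license, scrape, vendor)
    preparation_steps: list   # cleaning and labelling steps applied
    recorded_at: str

def fingerprint(path: Path) -> str:
    """Hash the dataset in chunks so large files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_dataset(path: Path, source: str, steps: list) -> DatasetRecord:
    rec = DatasetRecord(
        dataset_path=str(path),
        sha256=fingerprint(path),
        source=source,
        preparation_steps=steps,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )
    # Append-only JSONL file keeps an auditable history of dataset versions.
    with open("dataset_lineage.jsonl", "a") as log:
        log.write(json.dumps(asdict(rec)) + "\n")
    return rec
```

Linking the resulting hash to each trained model version gives the dataset-to-model traceability the article describes.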
Technical documentation
Technical documentation serves as the cornerstone of the conformity assessment under Article 11. This documentation must allow authorities to assess compliance without having to reverse-engineer the model. Annex IV of the EU AI Act defines the specific content that technical documentation for high-risk AI systems must contain.
Logging and traceability
Automatic logging requirements under Article 12 demand that high-risk systems record events to identify risks and substantial modifications. Assessors expect logs that capture inputs, outputs, and decision points to allow for full traceability. The logged events must also support post-market monitoring and risk identification, which can help investigations and reconstruction of a system's functioning when relevant. Without such logs, verifying compliance becomes nearly impossible.
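A minimal sketch of what such logging could look like in practice follows. The JSON field names and the `ai_events.jsonl` sink are illustrative assumptions, not terms prescribed by Article 12.

```python
import json
import logging
from datetime import datetime, timezone
from uuid import uuid4

# Structured, append-only event log: each inference is recorded with its
# inputs, output, and decision context so the sequence of events can be
# reconstructed later.
logger = logging.getLogger("ai_audit")
handler = logging.FileHandler("ai_events.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_decision(model_version: str, inputs: dict, output, operator_id: str) -> str:
    event = {
        "event_id": str(uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # ties the event to a specific model build
        "operator_id": operator_id,       # who was responsible at the time
        "inputs": inputs,
        "output": output,
    }
    logger.info(json.dumps(event, default=str))
    return event["event_id"]
```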
Teleport's take:
Logged events that cannot be attributed to authorized identities are difficult to substantiate. Assigning hardware-backed, cryptographic identities to non-deterministic AI actors provides the visibility needed to support full traceability and reconstruction of access paths within infrastructure.
Interpretable outputs and human oversight
Finally, Articles 13 and 14 require high-risk AI systems to be designed and documented so deployers can interpret outputs and intervene when necessary, and this information must be available to support conformity assessments.
Documentation must describe the human oversight measures and the technical provisions that allow deployers to interpret outputs and to interrupt or stop the system. Together, these records form an evidence chain that validates the safety and reliability of the artificial intelligence system.
How conformity assessments evaluate EU AI Act compliance
A conformity assessment determines whether a high-risk AI system meets EU AI Act requirements before it is placed on the market. It evaluates a complete evidence chain, rather than isolated documentation.
Core components include:
- Risk management → defined, tested, and continuously updated controls
- Data governance → documented datasets, provenance, and bias mitigation
- Technical documentation → audit-ready documentation aligned to Annex IV
- Logging and traceability → records of inputs, outputs, and decisions
- Human oversight → interpretable systems with the ability to intervene
- Post-market monitoring → ongoing monitoring and incident reporting
Common EU AI Act compliance gaps across the AI lifecycle
Organizations developing or deploying artificial intelligence (AI) systems under the Act frequently create or inherit compliance gaps at critical lifecycle stages. Identifying these common gaps helps organizations avoid enforcement actions and demonstrate regulatory readiness.
| Phase | Gap Type | Primary Risk |
|---|---|---|
| Development | Lack of data rationale and version control | Inability to reproduce results or trace bias sources |
| Integration | Broken chain of custody for documentation | Downstream providers unaware of upstream obligations |
| Open Source | Misunderstanding of license exemptions | Systemic risk models treated as exempt from compliance |
| Deployment | Absence of post-market monitoring plans | Reactive compliance rather than proactive risk management |
| Procurement | Failure to use Model Contractual Clauses | Undefined liability and unclear data governance rights |
The development phase gap: Rationale and version controls
During AI development, teams often fail to document the rationale behind data selection or parameter tuning.
A significant gap involves the lack of version control for training datasets, which makes it impossible to reproduce results or trace bias sources later. Article 10 of the EU AI Act requires data governance practices ensuring that training, validation, and testing datasets are relevant, sufficiently representative, and to the best extent possible, free of errors and complete in terms of the intended purpose.
While development teams focus on internal data governance, the transition to integration introduces distinct responsibilities regarding the handoff of technical documentation. Upstream providers must ensure that downstream modifiers receive sufficient information to meet their own compliance obligations, particularly when modifications trigger a shift in provider status.
The integration phase gap: Chain of custody and documentation
When general-purpose AI (GPAI) models are integrated into high-risk downstream systems, the chain of custody for documentation often breaks. Upstream providers may fail to supply sufficient technical documentation to downstream providers, violating obligations for GPAI models.
Under the EU AI Act, if a downstream actor fine-tunes or modifies an existing GPAI model in a way that leads to a significant change in the model's generality, capabilities, or systemic risk, that downstream modifier becomes the provider of the modified GPAI model. The Commission presumes such a change occurs when the training compute used for modification exceeds one-third of the original model's training compute, or one-third of 10²³ floating-point operations (FLOPs) when the original compute is unknown.
This presumption is rebuttable with evidence. This shift in provider status triggers documentation and compliance duties related to the specific modification. For standard GPAI models, obligations are limited to the scope of the change rather than the entire underlying model. However, if the modified model qualifies as having systemic risk, obligations may extend beyond the modification itself.
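Expressed as arithmetic, the presumption reduces to a simple comparison. The sketch below encodes the one-third rule described above, plus the 10²⁵ FLOP systemic-risk threshold discussed in the next section; the function names are illustrative, and this is a reading aid rather than legal advice.

```python
FALLBACK_BASELINE_FLOPS = 1e23   # assumed baseline when original compute is unknown
SYSTEMIC_RISK_FLOPS = 1e25       # cumulative training-compute threshold

def modification_presumed_significant(modification_flops: float,
                                      original_flops: float | None = None) -> bool:
    """Rebuttable presumption: modification compute exceeds one-third of baseline."""
    baseline = original_flops if original_flops is not None else FALLBACK_BASELINE_FLOPS
    return modification_flops > baseline / 3

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """Above this threshold, the open-license exemption does not apply."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOPS

# Example: a fine-tune using 5e22 FLOPs of a base model with unknown compute
print(modification_presumed_significant(5e22))  # True: 5e22 > (1e23 / 3)
```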
The open source exception gap: Licensing
Providers often misunderstand the free and open-license exemption.
The Act clarifies that this exemption is unavailable to GPAI models with systemic risk. Models trained with cumulative compute exceeding 10²⁵ FLOPs face full compliance obligations regardless of license type. This threshold ensures that even open-source models with significant societal impact undergo rigorous evaluation and risk mitigation.
The deployment phase gap: Post-market monitoring
Deployers frequently lack tools to monitor system drift or detect anomalies in real time. The absence of the post-market monitoring plans that Article 72 requires of providers leads to reactive rather than proactive compliance. While deployers must monitor system operations under Article 26 and provide feedback to providers, the formal post-market monitoring plan obligation rests with the providers of high-risk AI systems.
Documented procedures for collecting feedback, analyzing incidents, and implementing corrective actions represent the necessary oversight documentation. Without these mechanisms, organizations cannot demonstrate that they are actively managing risks as the system operates in dynamic environments.
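As one example of the drift monitoring deployers often lack, the sketch below computes the Population Stability Index (PSI), a common distribution-shift measure. The 0.2 alert threshold is a widely used rule of thumb, not a value the Act specifies.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI between two per-bucket proportion lists that each sum to 1.0."""
    eps = 1e-6  # avoid log of zero for empty buckets
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.35, 0.25, 0.15]    # feature distribution at validation time
production = [0.10, 0.30, 0.30, 0.30]  # distribution observed in production

score = psi(baseline, production)
if score > 0.2:  # common heuristic for "significant shift"
    print(f"PSI {score:.3f}: significant drift, trigger the review workflow")
```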
Teleport's take:
Proactive compliance at the deployment phase depends on knowing what every identity is doing as it happens, not after an incident occurs. Behavior monitoring across the full identity chain, with session context, risk signals, and timeline clarity, supports post-market monitoring by providing the operational context to shift from a documentation exercise to an active risk management practice.
The procurement phase gap: Contractual ambiguity
Without standardized contractual frameworks, organizations may face challenges in defining liability and data governance rights between suppliers and buyers.
The Model Contractual Clauses for AI (MCC-AI) templates offer one approach to addressing these issues. These clauses establish a common minimum standard of obligations, helping both parties align on transparency, risk management, and accountability. Adopting them helps organizations proactively address potential risks and responsibilities before disputes arise.
How to address AI lifecycle gaps
- During development: Document data decisions early by recording dataset sources, assumptions, and version changes from the start.
- During integration: Maintain documentation across handoffs, ensuring downstream teams receive complete technical and compliance information.
- During model evaluation: Validate open source usage to confirm whether models meet systemic risk thresholds before assuming exemptions.
- During deployment: Implement real monitoring in production to track performance, detect drift, and define incident response workflows.
- During all phases: Define ownership and contracts. Assign responsibility for compliance, data governance, and model changes.
The EU AI Act evidence chain for agentic systems: Explainability, traceability, and verification
Solving the black box problem
High-risk AI systems under the Act must support human oversight, which requires that qualified individuals can interpret system outputs and understand capabilities and limitations. Article 14 further mandates that providers design systems so designated individuals can intervene when necessary.
But deep learning models often function as black boxes, producing outputs without inherently interpretable reasoning pathways.
Providers should document their explanation methods, such as attention visualizations or feature attribution techniques, along with uncertainty estimates where relevant to the system's risk profile. The technical documentation must also include fallback procedures for scenarios where the system exceeds predefined risk thresholds or operates outside safe parameters. Risk management measures such as anomaly detection and documented escalation paths support compliance with the continuous risk management obligations under Article 9.
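A fallback procedure of the kind described above can be as simple as an abstain-and-escalate path. The sketch below assumes a scalar confidence score and a hypothetical `escalate` hook; a real system would tie both to the documented risk analysis.

```python
CONFIDENCE_FLOOR = 0.85  # predefined risk threshold from the documented risk analysis

def decide(prediction: str, confidence: float, escalate) -> str:
    """Act autonomously only inside safe parameters; otherwise hand off to a human."""
    if confidence < CONFIDENCE_FLOOR:
        escalate(prediction, confidence)  # hypothetical hook: notify a human reviewer
        return "ESCALATED_TO_HUMAN"
    return prediction

# Example: a low-confidence output is routed to a person instead of acting
result = decide("approve_claim", 0.62,
                lambda p, c: print(f"review needed: {p} at confidence {c}"))
print(result)  # ESCALATED_TO_HUMAN
```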
Traceability in agentic workflows
Agentic AI systems, which autonomously execute multi-step tasks or chain multiple tools, present unique traceability challenges.
Logging of agentic workflows must capture events relevant to risk identification and system monitoring rather than only final outputs. This evidence should include timestamped logs capturing relevant parameters, inputs, outputs, and event descriptions. Runtime event logs and user interaction logs are critical evidence types, and these records should capture user or operator identity along with relevant event descriptions.
Without system-level logging that traces outcomes back to specific inputs, providers may struggle to demonstrate compliance during regulatory audits. Comprehensive logging allows investigators to reconstruct the exact sequence of events leading to a specific outcome, which is essential for liability determination and safety improvements. These records must be retained for a minimum of six months, unless another applicable Union or national law requires a longer retention period.
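One way to achieve this is to chain each agent step to its predecessor so the full execution path can be replayed. The sketch below is illustrative; the record fields and the `agent_trace.jsonl` sink are assumptions, not a prescribed schema.

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

class AgentTrace:
    """Append-only trace for a multi-step agent run, one record per tool call."""

    def __init__(self, operator_id: str, task: str):
        self.run_id = str(uuid4())
        self.operator_id = operator_id
        self.task = task
        self.prev_step = None

    def log_step(self, tool: str, tool_input, tool_output) -> str:
        step_id = str(uuid4())
        record = {
            "run_id": self.run_id,
            "step_id": step_id,
            "parent_step": self.prev_step,  # links steps into a replayable chain
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "operator_id": self.operator_id,
            "task": self.task,
            "tool": tool,
            "input": tool_input,
            "output": tool_output,
        }
        with open("agent_trace.jsonl", "a") as f:
            f.write(json.dumps(record, default=str) + "\n")
        self.prev_step = step_id
        return step_id
```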
Teleport's take:
Multi-step agentic tasks create unique traceability challenges when system-level logging cannot attribute outcomes to specific actors. Securing agents with attestable workload identities and full identity chain mapping enables precise attribution of decisions, which is essential to meet the logging and traceability requirements for demonstrating compliance.
Data lineage and copyright compliance
For general-purpose AI (GPAI) models, Article 53 requires providers to publish a sufficiently detailed summary of training content and establish a copyright policy. Providers must comply with the 2019 EU Copyright Directive, including respecting opt-out mechanisms for text and data mining.
This documentation should include structured summaries of data sources and indicate whether data was obtained through licensing, web scraping, or other means. The Code of Practice requires providers to respect machine-readable opt-outs including robots.txt protocols and to ensure they have lawful access to content when crawling the web for training data. Maintaining these records demonstrates good faith efforts to respect intellectual property rights and provides a clear audit trail for regulators to verify that training data was lawfully obtained and processed.
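For the robots.txt portion of these opt-outs, Python's standard library already provides a parser. The sketch below checks a single URL before crawling; it does not cover other machine-readable text-and-data-mining reservation mechanisms, which a real pipeline would also need to honour.

```python
from urllib import robotparser

# Check a target URL against the site's robots.txt before fetching it
# for training data. Note that read() performs a network request.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

if rp.can_fetch("my-training-crawler", "https://example.com/articles/1"):
    print("Crawling permitted by robots.txt")
else:
    print("Opt-out detected: skip this URL and record the decision")
```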
Technical verification
For high-risk AI systems under the EU AI Act, conformity assessment bodies must verify that documented results can be independently reproduced. However, non-deterministic model behavior and reliance on proprietary infrastructure tend to complicate this requirement.
Providers of high-risk AI systems must document their hardware specifications, software dependencies, and training configurations before placing the system on the market. This documentation enables auditors to understand development conditions and assess conformity. Using version-controlled environments and maintaining performance documentation would help to demonstrate compliance.
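Capturing the software side of that environment can be largely automated. The sketch below writes a Python and package manifest to a JSON file; hardware specifications and training configuration would come from your own infrastructure inventory and are not shown.

```python
import json
import platform
import sys
from importlib import metadata

# Snapshot the interpreter, OS, and installed package versions alongside
# a training run so documented results can be tied to the conditions
# under which they were produced.
manifest = {
    "python": sys.version,
    "platform": platform.platform(),
    "packages": {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
    },
}

with open("environment_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2, sort_keys=True)
```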
How to align agentic systems with EU AI Act standards
→ Document how agents generate outputs. Record explanation methods, uncertainty levels, and known limitations for each decision.
→ Enable human oversight by design. Ensure operators can interpret outputs and intervene, interrupt, or stop agent actions.
→ Log all agent activity. Capture inputs, outputs, decision points, timestamps, and user or operator interactions.
→ Track multi-step agent workflows. Record tool use, intermediate steps, and full execution paths across systems.
→ Maintain data lineage and copyright compliance. Document data sources, access methods, and adherence to opt-out and licensing requirements.
Conclusion: Building a culture of verifiable compliance
The EU Artificial Intelligence Act marks a fundamental shift from theoretical safety to demonstrable proof of compliance. Organizations treating documentation as an engineering afterthought risk severe enforcement actions and market exclusion. Conversely, those embedding evidence generation directly into the development lifecycle secure sustainable market access.
Organizations should prioritize the following actions:
- Immediate (April to June 2026):
- Audit existing documentation against Annex IV requirements.
- Identify gaps in technical documentation, logging infrastructure, and data governance records.
- Maintain and update version control records for training datasets as systems are updated and new data is incorporated.
- Near-term (July to August 2026):
- Finalize risk management system documentation demonstrating iterative updates.
- Implement automatic logging for high-risk systems and prepare human oversight protocols and instructions for use.
- Ongoing (Post-August 2026):
- Establish post-market monitoring procedures per Article 72.
- Integrate post-market monitoring findings into risk management updates.
Intentional, evidence-driven preparation turns compliance with the Act from a regulatory burden into a competitive advantage that builds trust with both regulators and end users.
Related articles
→ Preparing for the Cyber Security and Resilience Bill
→ Exploring DORA Compliance in Practice
→ NIS2 is Here: What Happens Next?
→ Accelerate Compliance with Teleport