
Guide: DORA Compliance Evidence for Agentic AI

by Kayne McGladrey Apr 27, 2026


Read this article to learn:

→ What DORA assessors actually evaluate
→ How DORA controls map to specific evidence requirements
→ Common evidence gaps that can interfere with audits
→ The evidence challenges of agentic AI
→ The full blueprint for DORA compliance now and in the future

The Digital Operational Resilience Act (DORA), formally Regulation (EU) 2022/2554, represents a fundamental shift in how financial institutions must demonstrate compliance.

DORA isn't a checklist of policies, but a mandate for demonstrable operational resilience. The regulation demands continuous monitoring and integrity-protected audit trails, moving the burden of proof from having a plan to proving that it works in real time. Enforcement of DORA began on January 17, 2025, and National Competent Authorities (NCAs) are currently in the active supervision phase.

That supervision coincides with an added complication: agentic AI introduces dynamic, non-deterministic behaviors that challenge traditional evidence collection methods. Static screenshots of policy documents are no longer sufficient.

This article defines how assessors evaluate design and operating effectiveness, identifies common evidence gaps, analyzes the unique challenges of AI agents, and prescribes the specific data artifacts to help you prove DORA compliance and satisfy DORA's five pillars.

Control design vs. operating effectiveness: What DORA assessors evaluate

Defining the two pillars of validation

Assessors evaluate DORA compliance through two distinct lenses:

  1. Design effectiveness, which asks whether the control architecture theoretically meets DORA requirements. For example, is there a documented ICT risk management framework approved by the management body?
  2. Operating effectiveness, which asks whether the control functions consistently over time under real-world conditions. For example, do logs show the framework was actually executed and monitored?

Mapping evidence to DORA requirements

As the following table illustrates, each of the five DORA pillars requires different types of evidence:

DORA pillar | Required evidence | Purpose
Pillar 1 (ICT Risk Management) | The management body actively oversees and approves the ICT risk management framework per Article 5 (Governance and organisation), while Article 6 (ICT risk management framework) requires the framework to be updated after incidents occur. | A policy document validates the design, while meeting minutes and version-controlled updates prove the framework is functioning in practice.
Pillar 2 (Incident Management) | The four-hour initial notification rule in Article 19 (Reporting of major ICT-related incidents), which requires proof that the reporting clock starts immediately upon classification. | Timestamped logs pinpointing the exact moment an incident is deemed major and the resulting activation of the reporting workflow, whether automated or manual.
Pillar 3 (Resilience Testing) | Advanced testing obligations under Article 26 (Advanced testing), specifically mandating Threat-Led Penetration Testing (TLPT) for critical entities. | Proving that test scenarios mirror real-world threats, findings are rigorously documented, and remediation plans are executed to eliminate identified gaps.
Pillar 4 (Third-Party Risk) | A continuously updated Register of Information, as outlined in Article 28. | Regular updates triggered by new contract signings or alterations to subcontracting agreements ensure the register remains current.
Pillar 5 (Information Sharing) | Voluntary exchange of cyber threat intelligence encouraged by Article 45 (Information-sharing arrangements). | Active participation in trusted communities and adherence to established data protection safeguards throughout the exchange.

Understanding the "continuous" mandate

DORA Article 13 mandates continuous monitoring of the effectiveness of the digital operational resilience strategy, rather than relying solely on periodic assessments to demonstrate ongoing compliance.

This means that controls like access reviews or vulnerability scans must run at least annually for systems supporting critical or important functions. Evidence must also demonstrate detection mechanisms that trigger incident response processes in accordance with the requirements of Article 10 (Detection). Testing independence is also required: as Article 25 (Testing of ICT tools and systems) outlines, testing must be performed by independent parties, whether external specialists or internal teams with appropriate segregation of duties.
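The annual cadence is straightforward to verify mechanically. A minimal sketch in Python, where the control names and review dates are illustrative rather than drawn from any real system:

```python
from datetime import date, timedelta

# Controls supporting critical or important functions must run at least annually
REVIEW_INTERVAL = timedelta(days=365)

def overdue_controls(last_reviews: dict[str, date], today: date) -> list[str]:
    """Return control names whose last review is older than the annual cadence."""
    return sorted(
        name for name, last in last_reviews.items()
        if today - last > REVIEW_INTERVAL
    )

# Hypothetical review dates for two controls
reviews = {"access-review": date(2025, 1, 10), "vuln-scan": date(2026, 3, 1)}
print(overdue_controls(reviews, date(2026, 4, 1)))  # → ['access-review']
```

Running this on a schedule, and archiving its output, produces exactly the kind of timestamped operating-effectiveness evidence an assessor asks for.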

→ Watch: Don’t Be Afraid of DORA: Future-Proof Against Compliance Chaos

Common evidence gaps — and where DORA compliance fails

The "set it and forget it" trap in access control

Organizations often grant permanent administrative privileges, assuming that periodic reviews satisfy DORA. However, this creates a significant gap. Permanent access produces voluminous, unanalyzable audit trails. Assessors cannot distinguish between routine maintenance and suspicious activity if every action is performed with elevated privileges.

For example, a specific failure point is the lack of justification for why a user had admin rights at a specific timestamp. If a user accesses an important database at 3 PM for a task already completed at 9 AM, the gap is the inability to link the access to a specific business need.

“Paper tigers”: Written policies without control enforcement

Many compliance policies exist, but are not enforced by technical controls. For example, a policy stating that least privilege is mandatory is meaningless if the technical configuration allows users to escalate privileges without approval.

To account for this, assessors will cross-reference policy text with system configurations and access logs.
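That cross-referencing can itself be automated as a drift check. A minimal sketch, assuming you can extract a set of policy-approved admin accounts and a set of accounts actually holding admin rights in the live system (both sets here are hypothetical):

```python
def policy_violations(policy_admins: set[str], actual_admins: set[str]) -> set[str]:
    """Accounts holding admin rights that the written policy never granted."""
    return actual_admins - policy_admins

# Hypothetical snapshot: policy allows two admins, the system shows three
print(policy_violations({"alice", "bob"}, {"alice", "bob", "svc-legacy"}))
# → {'svc-legacy'}
```

An empty result, logged on every run, is evidence that the policy is enforced; a non-empty result is a finding before the assessor discovers it.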

Fragmented logging and the black box problem

Logs stored in disparate silos across network, application, and identity systems cannot be correlated — and under DORA, that gap has regulatory consequences. DORA requires a holistic view of ICT incidents. If an incident spans multiple systems, fragmented logs prevent the reconstruction of the attack chain, violating Article 10's detection requirements and the requirement for detailed major incident reporting in Article 19.

Third-party blind spots compound this problem. Many entities track internal users but lack visibility into how third-party vendors access their systems, failing the Register of Information and sub-contracting oversight requirements.

Organizations should prioritize identity-traceable audit events and unified logging before addressing supplementary compliance items to ensure high priority gaps are closed first.

Recap: Top DORA evidence gaps

  1. Permanent privileges with no business-need justification per access event
  2. Fragmented logs that cannot reconstruct cross-system incident chains
  3. No visibility into third-party vendor access paths
  4. Policies not enforced by technical controls
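Closing the fragmented-logging gap starts with merging events from every system into identity-keyed timelines. A minimal sketch, where the event shapes and system names are hypothetical:

```python
from collections import defaultdict

def correlate(events: list[dict]) -> dict[str, list[dict]]:
    """Group events from network, application, and identity logs into
    per-identity timelines ordered by timestamp."""
    timeline = defaultdict(list)
    for event in events:
        timeline[event["identity"]].append(event)
    for identity in timeline:
        timeline[identity].sort(key=lambda e: e["ts"])
    return dict(timeline)

# Hypothetical events from three separate log silos
events = [
    {"ts": 2, "system": "app", "identity": "alice", "action": "query"},
    {"ts": 1, "system": "identity", "identity": "alice", "action": "login"},
    {"ts": 3, "system": "network", "identity": "vendor-bot", "action": "connect"},
]
```

A per-identity timeline like this is what lets an assessor reconstruct an attack chain across silos, and it makes third-party identities (like `vendor-bot` above) first-class citizens in the audit trail.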

Agentic AI challenges with DORA evidence requirements

Agentic AI systems differ from traditional software because they are dynamic and non-deterministic: they plan, iterate, and execute tasks autonomously with limited human involvement. Unlike traditional AI systems that simply generate an output, agentic AI systems can autonomously execute multi-step tasks to achieve a goal.

This introduces several risks, including:

  • Erroneous actions such as incorrectly scheduling appointments or producing flawed programming code
  • Unauthorized actions that exceed the agent's permitted scope
  • Biased or unfair decisions
  • Data breaches through inadvertent disclosure of sensitive information (including via prompt injection)
  • Disruption to connected systems such as deleting a production codebase or overwhelming external systems

DORA establishes ICT risk management requirements that apply to all digital systems used by financial entities, including new technologies like agentic AI. Organizations must assess the risks that agentic systems create against DORA's general resilience requirements, and understand that existing governance frameworks may need additional interpretation.

Identity attribution challenges

Identity attribution presents a core challenge under DORA. Traditional identity and access management (IAM) or privileged access management (PAM) typically links actions only to human users. Because AI agents often operate under service accounts or shared credentials, authentication systems designed for humans do not translate cleanly to agents, leaving gaps in identity visibility and attribution accuracy in logs.

If an agent deletes a production database (a worst-case scenario), logs must be able to prove which agent acted, what plan the agent generated, and who authorized the deployment. The black box execution problem compounds this; if an agent decides to call an API it was not explicitly programmed to use, standard logs may only show the API call, not the reasoning. Compliance with DORA requires understanding and proving the why behind ICT incidents, not just the end result.
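One way to make agent actions attributable is to log a structured audit event that binds each action to the initiating prompt, the generated plan, and the authorizer, plus an integrity hash so later tampering is detectable. A minimal sketch; the field names are illustrative, not any product's schema:

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass
class AgentAuditEvent:
    agent_id: str            # unique identity of the acting agent
    initiating_prompt: str   # the instruction that started the task
    plan: list[str]          # the reasoning/plan generated before execution
    action: str              # what the agent actually did
    authorized_by: str       # who approved the agent's deployment/permissions

    def fingerprint(self) -> str:
        """Deterministic integrity hash over the full record."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```

Because the hash covers the plan and prompt, a log entry can later prove not just that an API call happened, but the reasoning chain behind it, which is the "why" DORA demands.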

Agentic speed vs. human oversight

Speed versus oversight creates additional gaps. While agents operate at machine speed, human reviewers may struggle to intervene in real time. Human oversight, a core DORA requirement, becomes impossible if the system does not pause for approval on high-risk actions.

Evidence gaps emerge when agents execute irreversible actions without a human-in-the-loop checkpoint.
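A minimal human-in-the-loop checkpoint can be expressed as a gate that refuses to run high-risk actions without prior approval. A sketch, with a hypothetical action list:

```python
# Hypothetical set of irreversible, high-risk actions requiring human sign-off
HIGH_RISK = {"delete_database", "drop_table", "rotate_root_keys"}

def execute(action: str, approved: bool) -> str:
    """Pause irreversible actions until a human approves; low-risk actions proceed."""
    if action in HIGH_RISK and not approved:
        return "pending_approval"
    return "executed"

print(execute("delete_database", approved=False))  # → pending_approval
print(execute("read_metrics", approved=False))     # → executed
```

The evidentiary value is in logging both branches: every `pending_approval` entry proves the system actually paused, not merely that a policy said it should.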

Multi-agent “cascading failures”

In multi-agent environments, a single error can propagate rapidly. Complex interactions among agents substantially increase the risk of unpredictable outcomes. Evidence must therefore capture the interactions between agents, not just individual actions.

For example, a log showing "Agent A failed" is insufficient. Instead, evidence should show how "Agent B" (or other downstream agents) reacted to these upstream failures, and whether or not the wrong output was passed on to other agents. Otherwise, a mistake by one agent may produce a cascading effect if the erroneous output propagates across the system.
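Reconstructing such a cascade requires knowing which agents consumed which outputs. A minimal sketch that walks a hypothetical producer-to-consumer graph to list every downstream agent affected by a failure:

```python
def trace_failure(graph: dict[str, dict], failed_agent: str) -> list[str]:
    """Breadth-first walk over a producer->consumer graph, returning the
    failed agent plus every downstream agent that consumed tainted output."""
    affected, queue = [], [failed_agent]
    while queue:
        agent = queue.pop(0)
        affected.append(agent)
        queue.extend(graph.get(agent, {}).get("consumed_by", []))
    return affected

# Hypothetical pipeline: A feeds B; B feeds C and D
pipeline = {
    "agent-A": {"consumed_by": ["agent-B"]},
    "agent-B": {"consumed_by": ["agent-C", "agent-D"]},
}
print(trace_failure(pipeline, "agent-A"))
# → ['agent-A', 'agent-B', 'agent-C', 'agent-D']
```

Capturing this consumption graph in the audit trail turns "Agent A failed" into a complete blast-radius statement.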

AI hallucinations

While DORA does not use the term hallucination directly, AI frameworks identify it as a common cause of erroneous actions. For example, an agent may hallucinate by making a wrong plan to complete a task, calling non-existent tools, or calling actual tools in a biased manner.

To satisfy DORA's ICT risk management obligations, evidence must show mitigation of these errors.

How to address DORA evidence gaps

  1. Implement human-in-the-loop checkpoints for high-risk actions to ensure oversight remains feasible at machine speed
  2. Capture agent decision logs that record the full reasoning chain, including the plan generated before execution and the specific tools selected
  3. Establish agent-specific audit trails that link every action back to the initiating prompt and the authorization that granted the agent its permissions

The blueprint for DORA compliance

Identity-native audit events

Every action taken by a human or an agent must be linked to a unique, immutable identity (such as a cryptographic identity).

Organizations should also implement just-in-time (JIT) access for all human and non-human users. This evidence must show:

  • Specific access requests
  • The human approver or automated policy trigger
  • The exact duration of the elevation
  • Specific commands executed during the elevated period
  • Automatic revocation of privileges

This satisfies DORA's Article 9 (Protection and Prevention) and Article 10 (Detection) by demonstrating least privilege and establishing a clear audit trail for every elevated action.
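The five evidence points above can be captured as one structured record per elevation. A minimal sketch; the field names are illustrative, not any product's schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class JITAccessEvent:
    request: str           # the specific access request
    approver: str          # human approver or automated policy trigger
    granted_at: datetime   # start of the elevation
    revoked_at: datetime   # automatic revocation timestamp
    commands: list[str]    # commands executed during the elevated period

    def elevation_minutes(self) -> float:
        """Exact duration of the elevation, for audit reporting."""
        return (self.revoked_at - self.granted_at).total_seconds() / 60
```

Each record answers the assessor's question directly: who asked, who approved, for how long, what was done, and proof it ended.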

Session recordings and contextual logs

For critical functions, logs must include context, not just metadata. For AI agents, logs must capture the thought process or the plan generated before execution. For humans, session recordings of administrative actions are becoming the gold standard for proving intent and verifying that actions matched the approved scope.

This supports Article 17 (Incident Management) by allowing rapid root cause analysis, and Article 20 (Reporting Templates) by providing the granular data needed for regulatory submission.

Unified access logs and integrity protection

A single source of truth must aggregate logs from all ICT systems, including third-party interfaces. This can be partially achieved by centralizing logs on immutable media to ensure integrity and protection against unauthorized modification.

However, the system must also correlate events across the entire stack including network, host, application, and identity. This supports the Register of Information requirements under Article 28(3) for ICT third-party service provider documentation, and processing integrity of the ICT risk management framework per Article 6.

Automated proof of continuous monitoring

Aggregated data alone is insufficient; the system must also demonstrate that monitoring is active and effective.

Deploying automated dashboards that show real-time health checks of monitoring agents is one way to begin proving continuous monitoring capabilities. However, if a monitoring tool goes offline, the system must generate an immediate alert logged as a critical incident. This directly addresses the continuous monitoring requirement of Article 10, and the testing obligations defined in Article 24.
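A heartbeat check over the monitoring agents themselves is one simple way to generate that alert. A minimal sketch, with a hypothetical five-minute timeout:

```python
from datetime import datetime, timedelta

# Hypothetical threshold: a monitor is considered down after 5 silent minutes
HEARTBEAT_TIMEOUT = timedelta(minutes=5)

def stale_monitors(heartbeats: dict[str, datetime], now: datetime) -> list[str]:
    """Monitoring agents whose last heartbeat exceeds the timeout; each hit
    should be logged as a critical incident, not silently ignored."""
    return sorted(
        name for name, last_seen in heartbeats.items()
        if now - last_seen > HEARTBEAT_TIMEOUT
    )
```

The point is that the monitoring layer monitors itself: a gap in heartbeats becomes an incident record, which is precisely the evidence that monitoring was active and effective.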

Reporting precision

The four-hour initial notification clock starts when the entity classifies the incident as major, and an intermediate report must be submitted within 72 hours. This intermediate update provides the current status and a preliminary root cause analysis. Evidence must show timestamps for both awareness and classification events.
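Both deadlines can be derived mechanically from the classification timestamp, which is one reason precise timestamps matter so much. A minimal sketch:

```python
from datetime import datetime, timedelta

def reporting_deadlines(classified_at: datetime) -> dict[str, datetime]:
    """Initial notification is due 4 hours after classification as major;
    the intermediate report is due within 72 hours."""
    return {
        "initial_notification": classified_at + timedelta(hours=4),
        "intermediate_report": classified_at + timedelta(hours=72),
    }

# Incident classified as major on 1 Jan 2026 at 09:00
print(reporting_deadlines(datetime(2026, 1, 1, 9, 0)))
```

If the classification event itself is not timestamped in the logs, neither deadline can be proven to have been met.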

The following table highlights several relevant DORA evidence requirements:

Evidence example | DORA Article(s) | Purpose
JIT access logs | 9, 10 | Prove least privilege and audit trail
Session recordings | 17 | Enable root cause analysis and reporting
Immutable storage | 6, 12, 28 | Ensure log integrity and Register of Information
Health check alerts | 10, 24 | Demonstrate continuous monitoring
Classification timestamps | 18 | Validate incident reporting timelines
Agent-specific telemetry | 9, 10 | Link actions to prompts and authorization
Third-party access logs | 28 | Verify vendor access and sub-contracting chains
TLPT documentation | 26 | Prove realistic testing scenarios and remediation

Conclusion: Build a future-proof audit trail for AI and beyond

DORA compliance requires both proper documentation and comprehensive data generation. The gap between policy and practice can be bridged by rigorous, automated evidence collection alongside documented ICT risk management frameworks.

But as agentic AI continues to redefine modern operations, the definition of sufficient evidence must similarly modernize. Organizations that adopt JIT access, unified logging, and agent-specific telemetry today will not only survive the next NCA audit, but will also achieve longstanding operational resilience.

Begin by mapping your current evidence collection against DORA's requirements and verifying whether your infrastructure can generate that evidence. If your logs cannot reconstruct a specific agent's decision or a human's elevated session, you have a compliance gap that needs immediate remediation.

About the author: Kayne McGladrey is a CISSP, Former Defense Industrial Base CISO, senior IEEE member, and cybersecurity strategist with extensive experience in governance, risk, and compliance programs. Kayne advises organizations on aligning emerging technologies, including AI systems, with established security frameworks such as SOC 2, ISO 27001, and NIST.


Simplify DORA evidence collection for humans, machines, and AI

Teleport unifies audit trails across every resource and attributes every session to a real identity, accelerating DORA compliance and reducing audit prep work by up to 80% with:

  • Human, machine, and AI identities under the same audit trail
  • Zero standing access and JIT privileges to eliminate control gaps
  • Continuously generated evidence, exportable on demand
  • And more

Exploring DORA Compliance in Practice
How to Simplify DORA Compliance
Digital Operational Resilience Act (DORA): Navigating Compliance with Teleport
Don’t Be Afraid of DORA: Future Proof Against Compliance Chaos
EU AI Act Compliance: Requirements, Risks, and What to Document
