
DORA and AI Agents: Why Your LLM Provider's Log Doesn't Satisfy Article 30

Regulatory · 4 min read · March 2026

DORA — the Digital Operational Resilience Act — has applied since 17 January 2025. Financial entities subject to DORA have had more than a year to assess their ICT risk management frameworks. Most have not yet assessed what DORA means specifically for AI agents.

They should. The gap is significant.

What DORA Article 30 requires

DORA Article 30 governs contractual arrangements with ICT third-party service providers. LLM providers — OpenAI, Anthropic, Azure OpenAI, and others — are ICT third-party providers under DORA's definition. The regulation requires financial entities to maintain audit trails of third-party ICT provider interactions that are independent of the provider itself.

The logic is straightforward: an audit trail that depends on the provider's infrastructure and cooperation cannot serve as an independent record during an incident investigation or regulatory review. This is the same principle that prevents an audited company from writing its own audit report.

The structural problem with provider-native logs

When a financial institution uses OpenAI or Anthropic for AI agent operations, those providers offer logging capabilities. The logs are real. The problem is structural.

Provider-native logs live on infrastructure the provider controls. They are signed — if at all — by the provider's own keys. They are accessible through the provider's console, subject to the provider's retention policies, and dependent on the provider's continued cooperation. In a dispute involving the provider, or in an incident where the provider's infrastructure is compromised, those logs offer limited evidentiary value.

DORA's independent audit trail requirement means exactly what it says: the trail must be independent. That requires a governance layer that sits outside the provider relationship — capturing, signing, and preserving a record of each interaction before it reaches the provider's infrastructure.
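What does that capture point look like in practice? The minimal sketch below wraps each provider call so the request and the response are written to storage the institution controls, before and after the call crosses the wire. It assumes an OpenAI-style client exposing chat.completions.create; the audit path and helper names are illustrative, not any vendor's actual API.

```python
import json
import time
import uuid
from pathlib import Path

# Hypothetical location for the institution's own append-only store.
# The key point is that the provider never touches it.
AUDIT_DIR = Path("/var/audit/agent-trail")

def record_event(event: dict) -> None:
    """Append one audit record to storage the institution controls."""
    AUDIT_DIR.mkdir(parents=True, exist_ok=True)
    with open(AUDIT_DIR / "trail.jsonl", "a") as f:
        f.write(json.dumps(event, sort_keys=True, default=str) + "\n")

def audited_call(client, **request):
    """Record the request before it leaves, and the response as it returns.

    `client` is assumed to expose an OpenAI-style
    `chat.completions.create(**request)` method.
    """
    call_id = str(uuid.uuid4())
    record_event({"id": call_id, "kind": "model_request",
                  "ts": time.time(), "request": request})
    response = client.chat.completions.create(**request)
    record_event({"id": call_id, "kind": "model_response",
                  "ts": time.time(),
                  # OpenAI SDK v1 responses are pydantic models
                  "response": response.model_dump()})
    return response
```

The same wrapper pattern extends to tool invocations and data retrievals. What matters for independence is that the write happens on infrastructure the provider cannot reach.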

The 72-hour incident reporting problem

DORA Article 19 requires financial entities to report major ICT incidents against tight deadlines, with the intermediate report due within 72 hours. For AI agent incidents — an agent that accessed data outside its scope, executed an unauthorized transaction, or produced a decision that caused financial harm — that 72-hour window requires immediate access to a complete, forensically usable evidence record.

"We're waiting for logs from our LLM provider" is not a viable response to a regulator within a 72-hour window. The evidence needs to exist, independently, before the incident occurs.

What a DORA-compliant AI agent audit trail looks like

Four properties are required:

Independence. The trail must be captured and stored independently of any LLM provider. It cannot depend on provider cooperation or provider infrastructure.

Integrity. The trail must be cryptographically signed at the point of capture. Any modification must be detectable.

Completeness. Every agent action — every model call, every tool invocation, every data retrieval — must be captured. Partial logs do not satisfy incident investigation requirements.

Availability. The trail must be immediately exportable in a machine-readable format for regulatory submission. A sketch of a trail with these properties follows this list.
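As a concrete illustration, the sketch below gets integrity and availability in a few dozen lines: each record is hash-chained to its predecessor and signed at the point of capture, so any modification breaks the chain, and the whole trail exports as JSON Lines. It uses the Ed25519 primitives from Python's cryptography package; the class and method names are illustrative, not any particular product's API.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class TamperEvidentTrail:
    """Hash-chained, signed audit records: any edit breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        # Sketch only: in production the key lives in an HSM or KMS.
        self._key = Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()
        self._records = []
        self._prev_hash = self.GENESIS

    def append(self, event: dict) -> dict:
        """Sign and chain one record at the point of capture."""
        body = {"prev_hash": self._prev_hash, "event": event}
        payload = json.dumps(body, sort_keys=True, default=str).encode()
        digest = hashlib.sha256(payload).hexdigest()
        record = {**body, "hash": digest,
                  "signature": self._key.sign(payload).hex()}
        self._records.append(record)
        self._prev_hash = digest
        return record

    def verify(self) -> bool:
        """Recompute the chain and check every signature."""
        prev = self.GENESIS
        for rec in self._records:
            body = {"prev_hash": rec["prev_hash"], "event": rec["event"]}
            payload = json.dumps(body, sort_keys=True, default=str).encode()
            if rec["prev_hash"] != prev:
                return False  # chain broken: record removed or reordered
            if hashlib.sha256(payload).hexdigest() != rec["hash"]:
                return False  # record contents were altered
            try:
                self.public_key.verify(bytes.fromhex(rec["signature"]), payload)
            except InvalidSignature:
                return False  # signature does not match the contents
            prev = rec["hash"]
        return True

    def export_jsonl(self) -> str:
        """Machine-readable export (JSON Lines) for regulatory submission."""
        return "\n".join(json.dumps(r, sort_keys=True, default=str)
                         for r in self._records)
```

In production the signing key would live in an HSM or KMS and the records in append-only storage, but the tamper-evidence property itself is nothing more than the chain plus the signatures.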

The practical implication

Financial entities deploying AI agents for operational functions — credit decisions, customer interactions, fraud detection, claims processing — need a governance layer that satisfies all four properties above. That layer cannot be the LLM provider. It must sit between the agent fleet and the provider, capturing and signing every action independently.
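Combining the two sketches above gives the shape of that layer. The names carry over from the earlier sketches and remain hypothetical; this is the architecture in miniature, not a production implementation.

```python
# Hypothetical wiring of the two sketches above: every provider call is
# appended to the tamper-evident trail before and after it crosses the wire.
trail = TamperEvidentTrail()

def governed_call(client, **request):
    trail.append({"kind": "model_request", "request": request})
    response = client.chat.completions.create(**request)
    trail.append({"kind": "model_response", "response": response.model_dump()})
    return response

# At audit time: verify integrity, then export for submission.
# Neither step requires the provider's cooperation.
assert trail.verify()
evidence_bundle = trail.export_jsonl()
```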

DORA is already in force. The question is not whether you need this infrastructure. It is whether you have it.

↳ KYDE

Kyde sits between your agent fleet and any LLM provider — capturing, signing, and chaining every action into a tamper-evident, provider-independent audit trail that satisfies DORA Article 30 from day one.