The EU AI Act enforcement deadline for High-Risk AI systems is August 2, 2027. Most enterprises deploying AI agents in regulated environments believe they have time. They also believe their existing logging infrastructure — or their LLM provider's audit console — will be sufficient when that deadline arrives.
Both assumptions are wrong.
What the regulation actually says
Articles 12 and 26 of the EU AI Act establish specific requirements for High-Risk AI systems. The key obligations are not vague:
- Automatic logging of events throughout the system lifecycle
- Logs must be retained for a minimum of six months
- Logs must enable human oversight and post-hoc review
- For credit, insurance, and employment decisions: the log must capture enough context to explain the decision
Article 14 adds a human oversight requirement that has teeth: enterprises must be able to meaningfully intervene in or override AI decisions. That's impossible without a complete causal record — not just what the agent did, but what it was shown, what it retrieved, and why it decided what it decided.
Where most enterprises currently stand
Most AI deployments today rely on one of three logging approaches — all of which fall short:
Provider-native logs. OpenAI, Anthropic, and Azure all offer some form of request logging. The problem is structural: a log that lives on the provider's infrastructure, is signed by the provider's keys, and is accessible only through the provider's console is not an independent audit trail. It is a vendor report — and regulators and courts treat a vendor's self-attestation very differently from an independent record.
Application-layer logging. Many teams log at the application level — capturing inputs and outputs in their own database. This is better than nothing, but logs written and stored by the same software layer can be altered after the fact: a compromised host OS can rewrite them silently, and they carry no cryptographic integrity guarantee.
No logging. A significant portion of enterprise AI deployments have no structured audit trail at all. Agents run on shared service accounts with no traceable identity.
What "tamper-evident" actually means
The EU AI Act doesn't use the phrase "tamper-evident" directly — but the requirement is implicit. A log that can be altered after the fact does not satisfy the regulation's intent. When BaFin or the European Commission asks for the audit trail from a challenged automated decision, "we had logs but they may have been modified" is not an acceptable answer.
Tamper-evidence requires cryptographic integrity at the point of capture. Every entry must be signed at the moment it is written — and each entry must be chained to the previous one, so that any alteration is detectable. This is not a software configuration. It is an architectural property.
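The sign-and-chain property described above can be sketched in a few lines. This is an illustrative toy, not a production design: the function names and entry fields are invented for this example, and an HMAC with an in-process key stands in for what would, in practice, be an asymmetric signature made with a key the application host cannot read (e.g. held in an HSM).

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustrative only; real deployments keep this out of the host

def append_entry(chain, event):
    """Sign the event at the point of capture and chain it to the previous entry."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    body = json.dumps({"event": event, "prev_hash": prev_hash}, sort_keys=True)
    entry_hash = hashlib.sha256(body.encode()).hexdigest()
    signature = hmac.new(SIGNING_KEY, entry_hash.encode(), hashlib.sha256).hexdigest()
    chain.append({"event": event, "prev_hash": prev_hash,
                  "entry_hash": entry_hash, "signature": signature})

def verify_chain(chain):
    """Recompute every hash and signature; any alteration anywhere breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"event": entry["event"], "prev_hash": prev_hash}, sort_keys=True)
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["entry_hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        expected_sig = hmac.new(SIGNING_KEY, entry["entry_hash"].encode(),
                                hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected_sig, entry["signature"]):
            return False
        prev_hash = entry["entry_hash"]
    return True
```

Because each entry embeds the hash of its predecessor, editing any historical entry invalidates every entry after it — the alteration is detectable even if the attacker controls the log storage.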
The causal context requirement
This is the gap most enterprises underestimate. Article 14's human oversight requirement means that for every challenged AI decision, you need to reconstruct the full decision chain: what documents or data the agent retrieved, what the model was shown, what it returned, and in what sequence.
A log that records inputs and outputs is not sufficient. You need causal context — the state of the conversation and retrieval context at the moment of every tool call and decision.
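What such a causal record might capture per step can be sketched as a data structure. The field names below are illustrative, not a standard schema — the point is that each entry binds together the agent's identity, the retrieved material, the exact context the model saw, and the resulting action, in sequence:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class CausalLogEntry:
    """One step in an agent's decision chain. Field names are illustrative."""
    agent_id: str             # traceable identity, not a shared service account
    step: int                 # position in the decision sequence
    retrieved_refs: tuple     # IDs of documents/data pulled into context at this step
    context_shown: str        # the exact context the model was shown
    model_output: str         # what the model returned
    tool_call: Optional[dict] = None  # tool invocation made at this step, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
```

Replaying the entries ordered by `step` reconstructs what the agent was shown, retrieved, and did at each point — the decision chain a post-hoc review needs.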
What to do before 2027
Three things every enterprise deploying High-Risk AI systems should address now:
First, audit your current logging infrastructure against the specific requirements of Articles 12, 14, and 26. Most will find gaps.
Second, ensure your audit trail is architecturally independent of your LLM provider. Independence is not a feature you can request from your provider — it requires a separate governance layer.
Third, verify cryptographic integrity. If your logs can be modified without detection, they will not withstand regulatory scrutiny.
2027 is closer than it looks. The enterprises that treat audit infrastructure as an afterthought will build it under pressure. The ones that treat it as operational infrastructure will have it before they need it.
↳ KYDE
Kyde is a model-agnostic governance proxy that produces tamper-evident, cryptographically signed audit trails for every AI agent action — across every provider, from day one.