
HBR Just Described the Problem. Here's the Infrastructure That Solves It.

Thought Leadership · 5 min read · March 2026

In March 2026, Harvard Business Review published a piece by researchers from Carnegie Mellon and the University of Pittsburgh with a deceptively simple argument: deploying AI agents is not a software installation. It's a workforce management decision.

The article identified four recurring frictions that derail enterprise agent deployments at scale: identity, context, control, and accountability. It's worth reading in full. What it stops short of is specifying what an infrastructure layer that addresses all four actually looks like.

That's what we build.

On Identity

The HBR piece describes a customer service agent operating through a shared service account — no traceable identity, no role boundary, no scope limit. The result: a refund that should have been capped at €500 gets issued for €5,000, because nobody scoped the agent's authority.

The authors' prescription is correct: every agent needs a distinct identity, narrowly scoped permissions, and a traceable action log. What they don't describe is how to enforce that across a heterogeneous fleet — agents running on different frameworks, different providers, different machines — without touching every agent's code.

The answer is a governance proxy. One deployment point. Every agent in your fleet gets an identity, a role, and a scope — enforced before the call reaches the LLM provider. The agent sees no difference. Your audit trail sees everything.
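To make the proxy pattern concrete, here is a minimal sketch in Python. Every name in it (AgentIdentity, GovernanceProxy, the field names) is illustrative, not Kyde's actual API: the point is that identity and scope are checked at one enforcement point, before the call is forwarded, and the agent's own code never changes.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    # Hypothetical shape: a distinct identity with narrowly scoped permissions.
    agent_id: str
    role: str
    allowed_tools: set = field(default_factory=set)
    spend_limit_eur: float = 0.0

class GovernanceProxy:
    """Sits in the data path between agents and the provider. Every call is
    attributed to a registered identity and checked against its scope before
    it is forwarded; every decision lands in the audit trail."""

    def __init__(self, forward):
        self.forward = forward      # the real provider/tool call
        self.registry = {}          # agent_id -> AgentIdentity
        self.audit_log = []         # every attempt, allowed or denied

    def register(self, identity: AgentIdentity):
        self.registry[identity.agent_id] = identity

    def call(self, agent_id: str, tool: str, amount_eur: float = 0.0, **kwargs):
        identity = self.registry.get(agent_id)
        allowed = (
            identity is not None
            and tool in identity.allowed_tools
            and amount_eur <= identity.spend_limit_eur
        )
        self.audit_log.append(
            {"agent": agent_id, "tool": tool, "amount": amount_eur, "allowed": allowed}
        )
        if not allowed:
            raise PermissionError(f"{agent_id} is not scoped for {tool} at €{amount_eur}")
        return self.forward(tool, **kwargs)
```

In this sketch, the €5,000 refund from the HBR example never reaches execution: the call raises at the proxy because the identity's spend limit is €500, and the denial itself is logged.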

On Context

The article describes an HR agent retrieving a 2022 policy document and using it to guide a termination process — the current policy said something different. Nobody logged which source the agent relied on.

This is precisely why causal context capture matters. It's not enough to log what an agent did. You need to log why — the retrieval context, the last messages before every tool call, the data sources the agent accessed in the moments before it made a decision.

When that termination is challenged, the question won't be "did the agent take an action." It will be "what was the agent shown, and when." If you can't answer that, you can't defend the decision.
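The shape of a causal record can be sketched in a few lines. The record fields below are an assumption for illustration, not a specification: what matters is that each entry binds the action to what the agent was shown, including source versions, so the HR example above would have surfaced the 2022 policy document immediately.

```python
import datetime

class CausalContextLog:
    """Records not just what the agent did, but what it was shown: the
    retrieved sources and recent messages at the moment of each tool call,
    so a challenged decision can be reconstructed later."""

    def __init__(self):
        self.records = []

    def capture(self, tool_name, retrieved_docs, recent_messages, action):
        self.records.append({
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "tool": tool_name,
            # the "why": the sources (with versions) the agent relied on
            "retrieved_docs": [
                {"id": d["id"], "version": d["version"]} for d in retrieved_docs
            ],
            # the last messages before the call
            "recent_messages": list(recent_messages[-5:]),
            "action": action,
        })

    def sources_for(self, tool_name):
        """Answer 'what was the agent shown, and when' for a given call."""
        return [r for r in self.records if r["tool"] == tool_name]
```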

On Control

The authors make a point that most enterprises learn the hard way: deterministic controls need to wrap probabilistic systems. A model that passes your test suite today may behave differently tomorrow. In multi-agent environments, a bad output from one agent becomes an executable instruction for the next.

The structural answer is a policy enforcement point — a layer between the agent and the systems it acts on that validates every proposed action against hard rules before execution. Not guardrails in the model. Deterministic enforcement outside it.

That's not a feature you configure in your LLM provider's dashboard. It's infrastructure.
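A deterministic enforcement point can be sketched like this. The rules and the action shape are hypothetical; the property being demonstrated is determinism: the same proposed action always gets the same verdict, regardless of what any model emitted upstream, and nothing executes on a violation.

```python
def require_ticket(action):
    # Hard rule: every action must reference a ticket.
    return "Missing ticket reference" if not action.get("ticket_id") else None

def cap_refund(action):
    # Hard rule: refunds above the cap are rejected outright.
    if action.get("type") == "refund" and action.get("amount_eur", 0) > 500:
        return "Refund exceeds €500 cap"
    return None

POLICY_RULES = [require_ticket, cap_refund]

def enforce(action: dict):
    """Validate a proposed action against hard rules before execution.
    Returns (allowed, violations); execution proceeds only on (True, [])."""
    violations = [msg for rule in POLICY_RULES if (msg := rule(action))]
    return (len(violations) == 0, violations)
```

Because the check sits outside the model, a bad output from one agent cannot become an executable instruction for the next: it is just another proposed action that fails validation.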

On Accountability

The article ends where the legal exposure begins. It cites Moffatt v. Air Canada — a tribunal rejecting the airline's argument that its chatbot was a separate legal entity — as an early signal of how accountability will be assigned.

Their conclusion: organizations must maintain comprehensive records of how agents operate, enabling reconstruction of the full decision chain for any challenged outcome.

The word "comprehensive" is doing a lot of work there. A log that can be altered after the fact is not a record. It's a liability. Tamper-evidence is not a feature request — it is the architectural property that determines whether your evidence trail holds up under scrutiny or collapses under it.

The Gap Between Prescription and Infrastructure

The HBR article is excellent at naming the problem. What it leaves open is the implementation question: how do you actually deploy identity, causal context, deterministic control, and tamper-evident accountability — across every agent, every provider, every framework — without making it an engineering project every time you add an agent?

The answer is a governance layer that sits in the data path. One proxy. Every agent routes through it. Identity is registered. Context is captured. Policy is enforced. Every action is signed and chained before it reaches the provider.

The authors frame it as a management challenge. It is. But management challenges at machine speed require machine-speed infrastructure.

↳ KYDE

Kyde is the governance proxy that gives every AI agent an identity, captures causal context per decision, enforces policy deterministically, and produces a tamper-evident evidence trail — across every provider, without touching agent code.