Documentation, Guides, and Research
Documentation
Full integration guides, CLI reference, and API documentation will be published when KYDE reaches public availability.
The Shadow AI Trilogy. Three interconnected guides that form the complete playbook for discovering, classifying, and governing unscoped AI systems.
View Trilogy Overview →
How to Detect Shadow AI
Step-by-step technical methods to identify unscoped AI systems in enterprise networks using DNS rules, SIEM queries, and behavioral patterns.
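One of the detection methods named above can be sketched in a few lines: matching DNS query logs against known AI provider endpoints. The domain list and log format below are illustrative assumptions, not the guide's actual ruleset or an exhaustive blocklist.

```python
# Minimal sketch: flag DNS queries to known AI provider domains.
# Domain list and log format are illustrative assumptions only.
AI_PROVIDER_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(dns_log_lines):
    """Return (client_ip, domain) pairs for queries to AI endpoints.

    Expects lines shaped like: '<timestamp> <client_ip> <queried_domain>'.
    """
    hits = []
    for line in dns_log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed log lines
        _, client_ip, domain = parts[0], parts[1], parts[2]
        if domain in AI_PROVIDER_DOMAINS:
            hits.append((client_ip, domain))
    return hits

log = [
    "2025-06-01T09:14:02Z 10.0.4.17 api.openai.com",
    "2025-06-01T09:14:05Z 10.0.4.17 intranet.example.com",
    "2025-06-01T09:15:11Z 10.0.7.52 api.anthropic.com",
]
print(flag_shadow_ai(log))
# → [('10.0.4.17', 'api.openai.com'), ('10.0.7.52', 'api.anthropic.com')]
```

In practice the same match logic would run as a DNS firewall rule or SIEM query rather than a script; this only shows the shape of the pattern.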
How to Classify AI Systems Under EU AI Act
Framework for determining whether your AI systems are High-Risk, General-Purpose, or Low-Risk. Includes all 37 Annex III categories and obligations.
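The tiering logic behind such a framework can be sketched as a lookup over use-case categories. The categories and tier names below are simplified examples only, not the guide's actual mapping or the full Annex III text.

```python
# Illustrative sketch of a risk-tier lookup under a simplified reading
# of the EU AI Act. Categories here are examples, not the legal list.
ANNEX_III_EXAMPLES = {
    "employment_screening": "High-Risk",
    "credit_scoring": "High-Risk",
    "biometric_identification": "High-Risk",
}

def classify(use_case, is_general_purpose=False):
    # General-purpose models are tiered separately under the Act;
    # everything not matching a high-risk category falls through.
    if is_general_purpose:
        return "General-Purpose"
    return ANNEX_III_EXAMPLES.get(use_case, "Low-Risk")

print(classify("credit_scoring"))   # → High-Risk
print(classify("spam_filtering"))   # → Low-Risk
```

A real classification also weighs exemptions and provider-vs-deployer roles, which a lookup table cannot capture; the sketch only shows where tiering starts.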
Shadow AI Governance Checklist
8-phase checklist for implementing AI governance from discovery to compliance readiness. Includes 70+ checkboxes, RACI matrix, and incident response planning.
Regulatory analysis, technical deep-dives, and operational guidance for enterprises deploying AI agents at scale.
Your Employees Are Already Using AI. You Just Don't Know How.
78% of AI users bring their own tools to work. Your employees are not waiting for your AI strategy. They already have one — and the data they're processing with it isn't yours to govern yet.
The End of the App Layer: Why MCP Changes Everything About AI Governance
MCP lets AI agents connect directly to enterprise systems — bypassing the application layer that was always the implicit governance control point. The app layer doesn't get rebuilt. It gets bypassed. Something needs to replace the control point it represented.
What Happens When an AI Agent Gets Compromised — And Nobody Has the Logs
ForcedLeak demonstrated prompt injection against production enterprise agents in 2025. The most important question it raises isn't technical — it's operational. If this happened in your environment, would you know?
Shadow AI Is Already in Your Production Systems — You Just Can't See It
Shadow AI is the same problem as Shadow IT — at a different order of magnitude. Every LLM call that touches enterprise data without logging, attribution, or scope is a liability accumulating in silence.
The Missing Layer in Every Agent Architecture
The distinction between agent core and agent harness cuts to the heart of what most enterprise deployments get wrong. Single-user architecture breaks at scale in four predictable ways. The harness isn't an add-on — for enterprise, it's the product.
HBR Just Described the Problem. Here's the Infrastructure That Solves It.
Harvard Business Review identified four frictions that derail enterprise AI agent deployments: identity, context, control, and accountability. The article stops short of specifying what the infrastructure layer looks like. That's what we build.
DORA and AI Agents: Why Your LLM Provider's Log Doesn't Satisfy Article 30
DORA is already in force. Financial entities using AI agents for operational functions have a specific problem: vendor-provided logs don't constitute an independent audit trail. Here's why — and what does.
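What an independent audit trail can look like, as opposed to a vendor log, is illustrated below with a hash-chained record kept on the enterprise side. This is a generic tamper-evidence sketch, not KYDE's implementation; all field names are hypothetical.

```python
import hashlib
import json

# Sketch: a tamper-evident, hash-chained audit trail kept independently
# of the LLM provider. Each entry commits to the previous entry's hash,
# so any retroactive edit breaks verification. Field names are illustrative.
def append_entry(chain, event):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"event": event, "prev": prev_hash, "hash": entry_hash})
    return chain

def verify(chain):
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
append_entry(chain, {"actor": "agent-7", "action": "tool_call", "target": "crm.update"})
append_entry(chain, {"actor": "agent-7", "action": "llm_call", "model": "example-model"})
print(verify(chain))  # → True
```

Because the chain lives outside the provider's infrastructure and is verifiable without the provider's cooperation, it has the independence property that a vendor-exported log lacks.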
What the EU AI Act Actually Requires for Audit Trails — And What Most Enterprises Are Missing
The enforcement deadline is August 2, 2027. Most enterprises assume their LLM provider's logs will be sufficient. They won't be. Here's what the regulation actually demands — and where the gaps are.
The frameworks KYDE is designed to address.
EU AI Act
Risk-based framework for AI systems, including logging requirements for High-Risk AI systems.
Enforcement: August 2, 2027
NIS-2 Directive
Network and Information Security for essential and important entities.
In force; national transposition deadline: October 17, 2024
DORA
Digital Operational Resilience Act for financial sector entities.
In force: January 2025
GDPR
Art. 22 covers automated decision-making. Art. 35 requires DPIA for high-risk processing.
In force: May 2018