
Shadow AI Governance Checklist

Implementation · Compliance · 35 min read · March 2026

Governance is the step after detection and classification. This checklist walks you through taking a shadow AI system you've discovered and bringing it into compliance with the EU AI Act and operational best practices.

The Governance Lifecycle

This checklist covers 8 phases:

  1. Discovery & Assessment — Document the shadow AI system
  2. Classification & Legal — Determine its risk level
  3. Governance Implementation — Build audit trail, policy, enforcement
  4. Performance Monitoring — Test for bias and drift
  5. Compliance Documentation — Prepare for regulation
  6. Regulatory Readiness — Get sign-off and incident response ready
  7. Ongoing Compliance — Monthly/quarterly/annual reviews
  8. Continuous Improvement — Share learnings across the organization

Timeline: 4-6 months from discovery to compliance-ready.

Phase 1: Discovery & Initial Assessment

Step 1.1: Document the Shadow AI System

Create a system record with:

  • System name/description
  • Owner/responsible team
  • When discovered (date)
  • Detection method (DNS logs, firewall, employee report)
  • Date of first deployment (if known)
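As an illustration, the system record above can be kept as a structured object so every discovered system is documented the same way. The field names below are a hypothetical schema, not a prescribed one:

```python
from dataclasses import dataclass, asdict
from datetime import date
from typing import Optional

@dataclass
class ShadowAIRecord:
    """One row in the shadow AI inventory. Field names are illustrative."""
    name: str                              # system name/description
    owner: str                             # owner/responsible team
    discovered_on: date                    # when discovered
    detection_method: str                  # e.g. "DNS logs", "firewall", "employee report"
    first_deployed: Optional[date] = None  # often unknown for shadow systems

record = ShadowAIRecord(
    name="invoice-triage-bot",
    owner="finance-ops",
    discovered_on=date(2026, 3, 2),
    detection_method="DNS logs",
)
print(asdict(record)["name"])  # -> invoice-triage-bot
```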

Step 1.2: Understand System Purpose & Scope

  • What does this system do?
  • What decisions does it make?
  • How many employees/customers use it?
  • How frequently is it used?
  • What data does it process? (PII? Health? Financial?)

Step 1.3: Preliminary Risk Screening

  • Does it process personal data?
  • Does it make automated decisions affecting individuals?
  • Is it in a regulated industry?
  • Could it cause harm if it fails?

Phase 2: Classification & Legal Assessment

Step 2.1: Formal Classification

Use the AI Classification Guide (see related guides) to determine:

  • Is it prohibited? (If yes, discontinue immediately)
  • Is it High-Risk? (If yes, full governance required)
  • Is it General-Purpose? (Limited transparency obligations)
  • Is it Low-Risk? (Standard data protection)

Step 2.2: Substantial Modification Assessment

Check whether your governance approach could inadvertently make your organization the provider under the EU AI Act:

  • Will you modify prompts? Will you filter outputs?
  • Will you change model weights? Will you add training data?
  • If all are NO → Safe to proceed as deployer
  • If any are YES → Risk becoming provider; consult legal
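The decision rule above can be encoded directly. This sketch captures only the checklist's logic (the question keys are illustrative, and none of this is legal advice):

```python
def provider_risk(modifications: dict[str, bool]) -> str:
    """Apply the substantial-modification rule: if ANY modification is
    planned, treat it as a provider-status risk. Keys are illustrative."""
    if any(modifications.values()):
        return "Risk becoming provider; consult legal"
    return "Safe to proceed as deployer"

answers = {
    "modify_prompts": False,
    "filter_outputs": True,   # even output filtering can trigger the rule
    "change_weights": False,
    "add_training_data": False,
}
print(provider_risk(answers))  # -> Risk becoming provider; consult legal
```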

Phase 3: Governance Implementation

Step 3.1: Audit Trail Setup

Every decision the AI makes must be logged with context.

  • Audit trail system selected & deployed
  • Captures: Decision / Timestamp / User / Authorization / Output
  • Logs are immutable (cannot be deleted/modified retroactively)
  • Logs are cryptographically signed (tamper-proof)
  • Logs retained for minimum 3 years
  • Access control implemented (only scoped users can view)
  • All log access is itself logged
  • Regular backup of logs (offsite, immutable)
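A minimal sketch of the immutable, signed log entries described above: each entry embeds the previous entry's hash (so deletion or reordering breaks the chain) and carries an HMAC signature (so edits are detectable). The `SIGNING_KEY` here is a placeholder; in practice the key would live in a KMS or HSM, and the field names are assumptions:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-key-from-your-KMS"  # placeholder, not a real key

def append_entry(log: list, decision: str, user: str,
                 authorized: bool, output: str) -> dict:
    """Append a tamper-evident entry: embeds the previous entry's hash and
    signs the canonical payload, so retroactive edits break verification."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {
        "decision": decision,
        "timestamp": time.time(),
        "user": user,
        "authorized": authorized,
        "output": output,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["entry_hash"] = hashlib.sha256(payload).hexdigest()
    body["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    log.append(body)
    return body

audit_log: list = []
append_entry(audit_log, "approve_invoice", "svc-bot", True, "approved")
append_entry(audit_log, "reject_invoice", "svc-bot", True, "rejected")
```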

Step 3.2: Policy & Authorization Framework

Define what the AI system is scoped to do.

  • Policy document created (intended scope, decision limits)
  • Escalation thresholds defined (when to escalate to human)
  • Exception procedures documented
  • Signed by business owner & CISO
  • Policy is machine-readable (enforcement system can parse it)
  • Policy version control implemented
  • Policy cryptographically signed (hash included in every log entry)
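One way to make the policy machine-readable and link it to the audit trail is to serialize it canonically and hash it; that hash is what each log entry would embed. The policy fields below are illustrative, not a required schema:

```python
import hashlib
import json

# Illustrative policy document: scope, decision limits, escalation thresholds.
# In practice this would be loaded from a version-controlled, signed file.
policy = {
    "version": "1.2.0",
    "scope": ["invoice_triage"],
    "decision_limits": {"max_invoice_eur": 10000},
    "escalation": {"confidence_below": 0.8, "route_to": "human_review"},
}

# Canonical serialization: identical policies always yield identical hashes,
# so any log entry carrying this hash is bound to this exact policy version.
canonical = json.dumps(policy, sort_keys=True, separators=(",", ":")).encode()
policy_hash = hashlib.sha256(canonical).hexdigest()
print(policy_hash[:16])
```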

Step 3.3: Enforcement System

Implement controls to prevent the AI from violating its policy.

  • Enforcement layer deployed
  • System decisions routed through enforcement layer
  • Enforcement layer intercepts decisions before execution
  • Can be monitored without enforcement (audit-only mode)
  • Easy rollback if enforcement causes problems
  • Enforcement rules tested (10+ violation scenarios)
  • All violations correctly blocked/flagged
  • Performance impact measured & acceptable
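The interception pattern above can be sketched as a wrapper around the decision function: every request passes through a policy check before execution, and an audit-only mode flags violations without blocking. The `max_invoice_eur` limit and request shape are assumptions for illustration:

```python
from typing import Callable

class PolicyViolation(Exception):
    """Raised when a decision would exceed the system's policy limits."""

def enforce(decision_fn: Callable[[dict], dict], policy: dict,
            audit_only: bool = False) -> Callable[[dict], dict]:
    """Route decisions through a policy check before execution.
    In audit_only mode, violations are flagged but not blocked, which
    supports gradual rollout and easy rollback."""
    flagged: list = []

    def guarded(request: dict) -> dict:
        if request.get("amount_eur", 0) > policy["max_invoice_eur"]:
            flagged.append(request)
            if not audit_only:
                raise PolicyViolation(f"amount exceeds limit: {request}")
        return decision_fn(request)

    guarded.flagged = flagged  # expose flagged requests for monitoring
    return guarded

model = lambda req: {"action": "approve"}   # stand-in for the AI system
guarded = enforce(model, {"max_invoice_eur": 10000}, audit_only=True)
guarded({"amount_eur": 25000})              # flagged, not blocked
print(len(guarded.flagged))  # -> 1
```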

Step 3.4: Human Oversight Mechanism

  • Human review process defined (when review is required, by whom)
  • Review SLA specified (how quickly must review happen)
  • Review documentation requirements established
  • Appeal/override process defined
  • Reviewers trained on system scope, bias detection, escalation
  • Review system monitored (time to review, override rate, feedback captured)

Phase 4: Performance Monitoring & Bias Testing

Step 4.1: Baseline Performance Metrics

  • Accuracy: ____% (measured on validation set)
  • Precision: ____% (share of positive decisions that were correct; falls as false positives rise)
  • Recall: ____% (share of true positives the system caught; falls as false negatives rise)
  • Latency: ____ ms
  • Uptime target: ____%
  • Initial performance measured and documented
  • Comparison to human baseline (if applicable)
  • Acceptable performance thresholds defined
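As a reference, the baseline metrics above can be computed from confusion-matrix counts measured on the validation set:

```python
def baseline_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute baseline metrics from confusion-matrix counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,  # hurt by false positives
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,     # hurt by false negatives
    }

# Example: 80 correct flags, 10 false alarms, 100 correct passes, 20 misses
print(baseline_metrics(tp=80, fp=10, tn=100, fn=20))
```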

Step 4.2: Bias & Fairness Testing

  • Protected characteristics identified (gender, race, age, disability, socioeconomic)
  • Test scenarios designed for each characteristic
  • Fairness metrics defined (e.g., Disparate Impact Ratio)
  • Initial bias testing completed (500+ samples with known group membership)
  • Results disaggregated by protected group
  • Discrimination indicators measured
  • If bias found, mitigation strategy created
  • Quarterly bias testing scheduled
  • Production data monitored for group disparities
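The Disparate Impact Ratio mentioned above is straightforward to compute: the lowest group selection rate divided by the highest, with values below 0.8 (the "four-fifths rule") commonly treated as a discrimination indicator. The group labels and counts below are made up for illustration:

```python
def disparate_impact_ratio(outcomes: dict) -> float:
    """Four-fifths rule check. `outcomes` maps each protected group to
    (favorable decisions, total decisions); returns min rate / max rate."""
    rates = {group: fav / total for group, (fav, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

dir_score = disparate_impact_ratio({
    "group_a": (45, 100),   # 45% favorable rate
    "group_b": (30, 100),   # 30% favorable rate
})
print(round(dir_score, 3))  # -> 0.667 (below 0.8: investigate and mitigate)
```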

Step 4.3: Performance Drift Detection

  • Performance monitoring dashboard created
  • Tracks accuracy, precision, recall over time
  • Alerts if metrics drop below thresholds
  • Data drift detection implemented
  • Incident response process defined (detect → investigate → remediate)
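A minimal sketch of the alerting step above: track rolling accuracy over recent labelled decisions and fire when it drops below the threshold set in Step 4.1. A production system would also monitor data drift per input feature (e.g. with a population stability index); the threshold and window here are assumed values:

```python
from collections import deque

class DriftMonitor:
    """Alert when rolling accuracy over the last `window` decisions
    falls below the acceptable threshold from Step 4.1."""

    def __init__(self, threshold: float = 0.90, window: int = 100):
        self.threshold = threshold
        self.recent = deque(maxlen=window)

    def record(self, correct: bool) -> bool:
        """Record one labelled outcome; return True if an alert should fire."""
        self.recent.append(correct)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data to judge yet
        accuracy = sum(self.recent) / len(self.recent)
        return accuracy < self.threshold

monitor = DriftMonitor(threshold=0.90, window=10)
alerts = [monitor.record(i % 5 != 0) for i in range(20)]  # 80% correct stream
print(any(alerts))  # -> True (80% rolling accuracy is below the 90% threshold)
```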

Phase 5: Compliance & Documentation

Step 5.1: Technical Documentation Package

  • System description (what it does, how it works)
  • Training data documentation
  • Model architecture & performance metrics
  • Risk assessment (what can go wrong)
  • Mitigation measures documented
  • Testing & validation results
  • Performance monitoring process
  • Human oversight process
  • Risk management plan (harms, probability, severity, mitigation)

Step 5.2: User Notification & Transparency

  • Users notified of automated decision
  • Clear notice: "An AI system was used to make this decision"
  • Provided at decision point (not buried in T&Cs)
  • Explanation available on request (plain language)
  • Explanation includes key factors in decision

Step 5.3: Appeals & Rights Process

  • Clear steps to appeal a decision
  • Escalation to human review
  • Timeline for response: _____ days
  • Right to appeal documented
  • No penalty for appealing
  • All appeals logged
  • Appeal outcomes tracked

Phase 6: Regulatory Readiness

Pre-Inspection Checklist

  • Technical documentation complete & accessible
  • Risk assessment signed
  • Performance metrics & bias testing results current
  • Audit trail complete & sample entries provided
  • Query examples demonstrated (show decisions by date, violations, metrics)
  • Log integrity verified (tamper-evident)
  • Latest bias testing completed within 90 days
  • No evidence of discrimination (or mitigation documented)
  • Legal review completed
  • Compliance officer sign-off obtained
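The "log integrity verified" item above can be demonstrated to an inspector with a chain walk: recompute each entry's hash and confirm the next entry recorded it. This sketch assumes entries store a canonical `payload` string and a `prev_hash` field, mirroring a hash-chained log:

```python
import hashlib
import json

def verify_chain(log: list) -> bool:
    """Tamper-evidence check: each entry's recorded prev_hash must equal
    the SHA-256 of the previous entry's payload. Any edit or deletion
    anywhere in the chain makes verification fail from that point on."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        prev_hash = hashlib.sha256(entry["payload"].encode()).hexdigest()
    return True

def make_entry(prev_hash: str, decision: str) -> dict:
    payload = json.dumps({"decision": decision, "prev_hash": prev_hash},
                         sort_keys=True)
    return {"payload": payload, "prev_hash": prev_hash}

chain, h = [], "0" * 64
for d in ("approve", "reject"):
    entry = make_entry(h, d)
    chain.append(entry)
    h = hashlib.sha256(entry["payload"].encode()).hexdigest()

print(verify_chain(chain))  # -> True
chain[0]["payload"] = chain[0]["payload"].replace("approve", "deny")
print(verify_chain(chain))  # -> False (the edit breaks the chain)
```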

Incident Response Plan

  • Definition of "incident" (e.g., High-Risk decision outside policy)
  • Notification procedures (who to contact)
  • Investigation process
  • Remediation process (how to fix affected decisions)
  • Regulator notification process (if required)
  • Key contacts identified & escalation path defined

Phase 7: Ongoing Compliance & Monitoring

Monthly Monitoring

  • Performance metrics reviewed
  • Alert logs reviewed (concerning patterns?)
  • Audit trail integrity verified
  • Issues reported to leadership

Quarterly Assessment

  • Bias testing completed
  • Performance drift analysis
  • Policy violations reviewed
  • Appeals/override patterns analyzed
  • Compliance status reported to leadership

Annual Comprehensive Review

  • Full bias testing (all protected characteristics)
  • Performance benchmarking
  • Documentation updated
  • Risk assessment revisited
  • Classification re-validated (still High-Risk?)
  • Legal/compliance sign-off renewed

Phase 8: Continuous Improvement

  • Document governance experience (what worked, what was hard)
  • Create playbook/runbook for other High-Risk systems
  • Train other system owners on compliance process
  • Share best practices across the organization

Ready to Implement?

Kyde automates the audit trail, policy enforcement, and compliance monitoring across all three stages of the trilogy: detection, classification, and governance.
