Classification is the foundation of AI governance. Get this wrong, and your compliance strategy fails.
The EU AI Act sorts AI systems into four risk tiers, each with different compliance obligations. From 2 August 2026, the obligations for High-Risk systems listed in Annex III become enforceable, hitting verticals such as HR, finance, insurance, and healthcare first.
The Four Categories
Every AI system in your organization falls into one of these:
| Category | Compliance Burden | Fines |
|---|---|---|
| Prohibited | Ban immediately | €35M or 7% of global annual turnover |
| High-Risk | Full governance + audit trail | €15M or 3% of global annual turnover |
| General-Purpose | Limited transparency | €15M or 3% of global annual turnover |
| Low-Risk | Standard data protection | GDPR-level |
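The four tiers above can be represented as a simple lookup, which is useful when tagging systems in an internal inventory. A minimal sketch (the names are illustrative, not defined by the Act):

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers, from most to least regulated."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    GENERAL_PURPOSE = "general_purpose"
    LOW_RISK = "low_risk"

# Compliance burden per tier, mirroring the table above.
BURDEN = {
    RiskTier.PROHIBITED: "Ban immediately",
    RiskTier.HIGH_RISK: "Full governance + audit trail",
    RiskTier.GENERAL_PURPOSE: "Limited transparency",
    RiskTier.LOW_RISK: "Standard data protection",
}

def burden_for(tier: RiskTier) -> str:
    """Return the compliance burden for a classified system."""
    return BURDEN[tier]
```

Tagging every system in the inventory with one of these four values is the precondition for everything that follows.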
Step 1: Is Your System Prohibited?
Article 5 bans several practices outright. If ANY of the following are true, your system is prohibited and must be shut down immediately (the three below are the most common in commercial deployments; Article 5 lists others, such as manipulative techniques and untargeted facial-image scraping):
Prohibited Use Case #1: Real-Time Facial Recognition
Question: Does your AI system perform real-time remote biometric identification (facial recognition, iris recognition, gait analysis) in publicly accessible spaces for law-enforcement purposes?
If YES → the practice is prohibited outside a few narrow, case-by-case exceptions. Stop using it.
Prohibited Use Case #2: Emotion Recognition for Law Enforcement
Question: Does your system infer emotions or mental states from biometric data (facial expressions, voice, heart rate) in the workplace, in educational institutions, or in law-enforcement, border-control, or immigration contexts?
If YES and no medical or safety exception applies → System is prohibited.
Prohibited Use Case #3: Social Scoring
Question: Does your system assign social credit scores or create comprehensive profiles for automated discrimination, exclusion, or punitive treatment?
If YES → System is prohibited.
If all three answers are NO: Continue to Step 2. Your system is not prohibited.
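The Step 1 screen above reduces to a boolean gate: any YES short-circuits to prohibited, three NOs pass the system on to Step 2. A minimal sketch (function and argument names are illustrative):

```python
def prohibited_screen(
    realtime_biometric_id: bool,        # Prohibited Use Case #1
    emotion_recognition_sensitive: bool, # Prohibited Use Case #2
    social_scoring: bool,                # Prohibited Use Case #3
) -> str:
    """Step 1 gate: any YES means the system is prohibited outright.

    All three NO means the system proceeds to the High-Risk check (Step 2).
    """
    if realtime_biometric_id or emotion_recognition_sensitive or social_scoring:
        return "PROHIBITED"
    return "CONTINUE_TO_STEP_2"
```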
Step 2: Is Your System High-Risk?
High-Risk systems require full governance: conformity assessment, technical documentation, risk management, audit trails, bias testing, and human oversight.
The High-Risk Categories (Annex III)
Annex III of the EU AI Act lists specific High-Risk use cases across these domains:
- Biometric & Identification: Facial recognition for access control, age/gender estimation
- Critical Infrastructure: AI controlling power grids, water systems, transportation
- Public Services: Determining eligibility for benefits, housing, permits
- Employment & Education: Resume screening, hiring decisions, grade assessment, student admissions
- Finance & Insurance: Credit decisions, underwriting, pricing, claims assessment
- Law Enforcement & Justice: Predictive policing, bail/sentencing recommendations, crime detection
- Healthcare & Welfare: Medical diagnosis, treatment recommendations, mental health assessment
How to Determine if YOUR System is High-Risk
Step 1: Identify the domain — Look at the list above. Does your AI system fit any of these categories?
Step 2: Assess impact — Even if your system fits a category, Article 6(3) carves out systems that only perform a narrow procedural task or merely support a human decision without materially influencing its outcome.
Step 3: Document the classification — Create a formal record.
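Step 3's formal record can be a small structured document kept alongside the system inventory. A minimal sketch, assuming JSON as the storage format (all field names are illustrative, not mandated by the Act):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ClassificationRecord:
    """An auditable record of one system's risk classification."""
    system_name: str
    domain: str            # e.g. "Employment & Education"
    annex_iii_match: bool  # does an Annex III category apply?
    classification: str    # "prohibited" | "high_risk" | "general_purpose" | "low_risk"
    rationale: str         # why this classification was chosen
    reviewed_by: str
    reviewed_on: str       # ISO date of the review

    def to_json(self) -> str:
        """Serialize the record for the audit trail."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical example entry:
record = ClassificationRecord(
    system_name="resume-screener-v2",
    domain="Employment & Education",
    annex_iii_match=True,
    classification="high_risk",
    rationale="Screens candidates for hiring decisions (Annex III, employment).",
    reviewed_by="compliance@example.com",
    reviewed_on="2026-01-15",
)
```

Keeping the rationale and reviewer in the record is what makes the classification defensible later, not just the label itself.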
Classification Decision Tree
┌─ Does system do real-time facial recognition in public?
│ └─ YES → PROHIBITED
├─ Does system recognize emotions/mental states for law enforcement?
│ └─ YES → PROHIBITED
├─ Does system create social credit scores?
│ └─ YES → PROHIBITED
├─ Does system make automated decisions affecting individual rights?
│ │ (hiring, credit, insurance, benefits, education, justice, healthcare)
│ ├─ YES → Check Annex III
│ │ ├─ Matches category → HIGH-RISK
│ │ └─ No match → GENERAL-PURPOSE
│ └─ NO → Continue
└─ Is system a general-purpose LLM (ChatGPT, Claude)?
├─ YES → GENERAL-PURPOSE
└─ NO → LOW-RISK or GENERAL-PURPOSE
Common Classification Mistakes
Mistake #1: "It's a General-Purpose LLM, So No Obligations"
Wrong. How you use a general-purpose LLM determines its risk classification.
- ChatGPT for customer service = general-purpose ✓
- ChatGPT for hiring decisions = HIGH-RISK ✗
Mistake #2: "If Humans Review It, It's Not Automated"
Wrong. The EU AI Act applies even with human review. High-Risk systems must still have audit trails, bias testing, and performance documentation.
Mistake #3: "We'll Reclassify If Enforcement Comes"
Too late. Once enforcement begins, fines apply to every system in production without proper governance, including systems deployed earlier. Classification must happen before deployment.
Obligations by Classification
If You're the Deployer (You Use the AI System)
| High-Risk | General-Purpose | Low-Risk |
|---|---|---|
| Conformity assessment | Transparency marker | Standard data protection |
| Risk management plan | Copyright compliance | |
| Audit trail | Abuse prevention | |
| Performance monitoring | | |
| Bias testing | | |
| Human oversight | | |
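The obligations above can be turned into a checklist generator for the system inventory. A minimal sketch mirroring the table (the structure is illustrative, not a legal artifact):

```python
# Deployer obligations per tier, mirroring the table above.
DEPLOYER_OBLIGATIONS = {
    "high_risk": [
        "Conformity assessment",
        "Risk management plan",
        "Audit trail",
        "Performance monitoring",
        "Bias testing",
        "Human oversight",
    ],
    "general_purpose": [
        "Transparency marker",
        "Copyright compliance",
        "Abuse prevention",
    ],
    "low_risk": ["Standard data protection"],
}

def obligations_for(classification: str) -> list[str]:
    """Return the deployer checklist for a classified system."""
    if classification == "prohibited":
        raise ValueError("Prohibited systems must be decommissioned, not governed.")
    return DEPLOYER_OBLIGATIONS[classification]
```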
The Substantial Modification Risk (Article 25)
Critical: Your governance solution might accidentally make you the provider instead of the deployer.
If your governance approach substantially modifies the AI system (changes how it behaves), you transition from Deployer → Provider status and inherit all provider obligations.
Examples of non-modification (safe):
- Logging decisions (monitoring, not changing)
- Blocking calls (preventing execution, not changing behavior)
- Routing decisions (directing traffic, not changing behavior)
This is why proxy-based governance is legally safer — it logs and blocks without modifying system behavior, keeping you as the deployer.
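The deployer-preserving pattern can be sketched as a thin wrapper: it logs every call for the audit trail and can refuse to route disallowed ones, but passes prompts and responses through verbatim, so the underlying system's behavior is never modified. Everything here (the `governed_call` wrapper, the purpose blocklist, the stand-in model function) is illustrative, not a reference implementation:

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-governance-proxy")

# Purposes this deployer refuses to route (hypothetical policy).
BLOCKED_PURPOSES = {"hiring_decision"}

def governed_call(model_fn: Callable[[str], str], prompt: str, purpose: str) -> str:
    """Proxy a model call: log it, optionally block it, never alter it."""
    log.info("ai_call purpose=%s prompt_chars=%d", purpose, len(prompt))  # audit trail
    if purpose in BLOCKED_PURPOSES:
        log.warning("blocked purpose=%s", purpose)
        raise PermissionError(f"Purpose '{purpose}' is blocked by governance policy.")
    response = model_fn(prompt)   # prompt passed through verbatim
    log.info("ai_response chars=%d", len(response))
    return response               # response passed through verbatim

# Usage with a stand-in model function:
echo_model = lambda p: f"echo: {p}"
print(governed_call(echo_model, "summarize this ticket", purpose="customer_service"))
```

Because the wrapper only observes and gates, it does not change how the system behaves, which is the crux of staying on the deployer side of Article 25.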
↳ Classified? Now Govern It.
You've determined your system's risk level. Now implement the governance controls and compliance obligations.