Key Takeaway
A unified compliance matrix reduces duplicate engineering effort by identifying shared controls across regulations. By mapping twenty-four controls against six regulatory frameworks, teams can implement a common baseline that satisfies the majority of requirements across all jurisdictions simultaneously, then layer on regulation-specific additions where needed.
Prerequisites
- Familiarity with your organization's AI system inventory and risk classifications
- Understanding of your data processing activities and data flow diagrams
- Access to legal counsel with AI regulation expertise (EU AI Act, CCPA/CPRA, HIPAA)
- Working knowledge of ISO 27001 or SOC 2 control frameworks
- An existing or planned AI governance structure with defined roles (see: AI Governance Framework)
- Basic understanding of ML model lifecycle: training, validation, deployment, monitoring
The Compliance Landscape
AI compliance is not a single regulation you can read and implement. It is a web of overlapping, sometimes contradictory requirements that span jurisdictions, industries, and system types. An AI system that processes health data for EU residents must simultaneously satisfy the EU AI Act's risk-based requirements, GDPR's data protection obligations, HIPAA's protected health information rules (if touching US health data), and potentially SOC 2 trust service criteria demanded by enterprise customers. Each regulation was written by a different body, with different terminology, different enforcement mechanisms, and different timelines.
The practical problem for engineering teams is that reading each regulation independently leads to redundant implementation work. A data lineage system built to satisfy GDPR Article 30's records-of-processing requirement also satisfies most of the EU AI Act's data governance obligations under Article 10, and contributes to SOC 2's processing integrity criteria. But without a cross-regulation view, teams often build three separate systems. This matrix exists to prevent that waste.
This guide covers six regulatory frameworks that collectively represent the compliance surface area most AI-deploying organizations face. Not every framework applies to every organization. A US-only healthcare startup has a different profile than a multinational financial services firm. The matrix is designed to be filtered: identify which regulations apply to your organization, then focus on the controls that are required or recommended for that subset.
This matrix is a technical implementation guide, not legal advice. Regulatory interpretation varies by jurisdiction, industry, and use case. Always validate your compliance approach with qualified legal counsel before treating any control as sufficient for regulatory compliance.
Regulations Covered
Each regulation below is summarized with its AI-specific implications. The collapsible sections provide the detail you need to understand why each control in the matrix is classified as required, recommended, or not applicable for that regulation.
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI-specific regulation. It establishes a risk-based classification system with four tiers: unacceptable risk (banned), high-risk (heavy obligations), limited risk (transparency obligations), and minimal risk (voluntary codes of conduct). The Act entered into force on August 1, 2024, with a phased compliance timeline: prohibited practices apply from February 2025, obligations for general-purpose AI models from August 2025, and high-risk system requirements from August 2026.
For engineering teams, the high-risk tier creates the most implementation work. Article 9 requires a documented risk management system that is iterative and updated throughout the AI system lifecycle. Article 10 mandates data governance practices including examination of training data for biases, relevance, and representativeness. Article 11 requires technical documentation sufficient for authorities to assess compliance. Article 13 demands transparency measures so deployers understand the system's capabilities and limitations. Article 14 requires human oversight mechanisms that allow human operators to understand, monitor, and override the system. Article 15 mandates accuracy, robustness, and cybersecurity requirements appropriate to the system's intended purpose.
General-purpose AI model providers face obligations under Article 53: maintaining technical documentation, providing information to downstream providers, establishing a copyright compliance policy, and publishing a training content summary. Models with systemic risk (above 10^25 FLOP training threshold) face additional obligations under Article 55 including adversarial testing, incident tracking, and energy consumption reporting.
The California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), grants consumers rights over their personal information and imposes obligations on businesses processing it. While not AI-specific, several provisions directly affect AI systems. Section 1798.100 establishes the right to know what personal information is collected and how it is used, which extends to AI training data and inference inputs. Section 1798.105 creates deletion rights that complicate model retraining when training data must be erasable.
Most critically for AI, Section 1798.185(a)(16) directed the California Privacy Protection Agency to issue regulations governing automated decision-making technology (ADMT). These ADMT regulations, currently in rulemaking, would require businesses to provide consumers with access to information about the logic of automated decisions, the right to opt out of ADMT in certain contexts, and pre-use notices for profiling decisions with significant effects. Engineering teams should design AI systems with opt-out mechanisms and decision explanation capabilities now, even before final ADMT rules are published.
The Health Insurance Portability and Accountability Act (HIPAA) predates modern AI but its requirements for protected health information (PHI) create significant constraints on AI systems in healthcare. The Privacy Rule (45 CFR Part 164, Subpart E) requires minimum necessary use of PHI, which means AI systems should only receive the PHI fields actually needed for the task. The Security Rule (45 CFR Part 164, Subpart C) mandates technical safeguards including access controls, audit controls, integrity controls, and transmission security for electronic PHI (ePHI) processed by AI systems.
AI-specific HIPAA concerns include: model memorization of PHI in training data, which can lead to PHI exposure through inference-time attacks; the use of PHI for model training without proper authorization or de-identification under the Safe Harbor or Expert Determination methods (45 CFR 164.514); business associate agreement (BAA) requirements when third-party AI services process PHI; and the breach notification obligations under the Breach Notification Rule (45 CFR Part 164, Subpart D) when AI system vulnerabilities lead to unauthorized PHI disclosure. The HHS Office for Civil Rights has signaled increased scrutiny of AI systems handling PHI.
SOC 2 (System and Organization Controls 2) is an audit framework based on the AICPA Trust Service Criteria. While not a regulation, SOC 2 compliance is effectively required for B2B AI service providers because enterprise customers demand it. The five trust service categories — Security, Availability, Processing Integrity, Confidentiality, and Privacy — each have AI-specific implications that extend beyond traditional software controls.
Processing Integrity (PI1.1 through PI1.5) is particularly relevant for AI: you must demonstrate that system processing is complete, valid, accurate, timely, and authorized. For AI systems, this means documenting model accuracy metrics, establishing validation procedures for model outputs, and maintaining evidence that the system performs as described. Security (CC6.1 through CC6.8) requires logical and physical access controls that extend to model artifacts, training data, and inference endpoints. Confidentiality (C1.1, C1.2) requires protecting confidential information throughout the AI pipeline, including training data, model weights, and inference inputs/outputs.
ISO/IEC 42001:2023 is the first international management system standard for artificial intelligence. It follows the Annex SL high-level structure shared by ISO 27001 and ISO 9001, making it integrable with existing management systems. The standard requires organizations to establish an AI management system (AIMS) covering the planning, development, deployment, and monitoring of AI systems.
Key clauses include: Clause 4 (Context) requiring organizations to identify interested parties and their AI-specific requirements; Clause 6 (Planning) mandating AI risk assessment and treatment processes; Clause 7 (Support) requiring AI-specific competence, awareness, and communication; Clause 8 (Operation) covering AI system lifecycle processes including data management, model development, and deployment; and Clause 9 (Performance Evaluation) requiring monitoring, measurement, analysis, and internal audit of AI systems. Annex A provides a reference set of AI-specific controls covering responsible AI, data management, system development, and third-party relationships. ISO 42001 certification is becoming a market differentiator for AI service providers.
The NIST AI Risk Management Framework (AI RMF 1.0, published January 2023) provides a voluntary framework for managing AI risks. Unlike the EU AI Act, it is not legally binding, but it is increasingly referenced in US federal procurement requirements and serves as a de facto standard for AI risk management practices. The framework is organized around four core functions: Govern, Map, Measure, and Manage.
Govern (GV) establishes the organizational context, policies, and processes for AI risk management. Map (MP) identifies and contextualizes AI risks, including risks from third-party components. Measure (MS) employs quantitative and qualitative methods to analyze, assess, and track AI risks. Manage (MN) prioritizes and acts on risks through mitigation, transfer, or acceptance. Each function contains categories and subcategories (e.g., GV-1.1, MP-2.3) that map to specific practices. The companion NIST AI RMF Playbook provides implementation suggestions for each subcategory. While voluntary, implementing the NIST AI RMF demonstrates due diligence and can support compliance arguments for other regulations.
The Master Compliance Matrix
The matrix below maps twenty-four controls across six categories to each of the six regulatory frameworks. Each cell indicates whether the control is required (the regulation explicitly mandates it), recommended (the regulation supports or implies it, or it constitutes best practice for compliance), or not applicable (the regulation does not address this area). Use this matrix to identify your baseline: controls that are required across all your applicable regulations should be implemented first.
Control Implementation Guide
The following sections provide implementation details for ten high-priority controls. Each includes what the control requires, how to implement it technically, and what evidence artifacts you need for audit purposes.
DG-01: Training Data Inventory
A training data inventory is the foundation of AI compliance. Without knowing what data your models were trained on, you cannot answer questions about consent, bias, or data rights. The inventory must be machine-readable, versioned, and linked to your model registry so that for any deployed model you can trace back to the exact datasets used in training. EU AI Act Article 10(2) requires documentation of data provenance, preparation design choices, and data collection processes.
interface DatasetRecord {
id: string;
name: string;
version: string;
source: string;
collectionDate: string;
consentBasis: "explicit" | "legitimate-interest" | "contract" | "legal-obligation" | "public-interest";
containsPII: boolean;
piiCategories?: string[];
demographicCoverage: Record<string, number>;
knownLimitations: string[];
dataSubjectCount: number;
retentionPolicy: string;
lastAuditDate: string;
}
interface TrainingDataInventory {
modelId: string;
modelVersion: string;
datasets: DatasetRecord[];
preprocessingSteps: {
step: string;
description: string;
dataImpact: string;
}[];
dataQualityScore: number;
lastUpdated: string;
}
// Evidence artifacts: inventory JSON per model version,
// data source agreements, consent records, preprocessing logsDG-03: Bias Detection in Training Data
Bias detection requires systematic analysis of training data across protected characteristics. This is not a one-time check but a recurring process that runs on every dataset version. EU AI Act Article 10(2)(f) requires examination for possible biases that are likely to affect the health and safety of persons or lead to discrimination. Implement statistical parity checks, representation analysis, and proxy variable detection.
from dataclasses import dataclass
from typing import Dict, List
@dataclass
class BiasReport:
dataset_id: str
dataset_version: str
analysis_date: str
demographic_parity: Dict[str, float]
representation_ratios: Dict[str, float]
proxy_variables: List[str]
disparate_impact_ratio: float
findings: List[str]
remediation_actions: List[str]
def analyze_demographic_parity(
dataset,
protected_attributes: List[str],
target_column: str,
threshold: float = 0.8,
) -> Dict[str, float]:
"""
Calculate demographic parity ratio for each
protected attribute. Ratio below threshold
indicates potential bias requiring remediation.
"""
results = {}
for attr in protected_attributes:
groups = dataset.groupby(attr)[target_column].mean()
min_rate = groups.min()
max_rate = groups.max()
results[attr] = min_rate / max_rate if max_rate > 0 else 0.0
return results
# Evidence: bias reports per dataset version,
# remediation logs, before/after comparisonsMG-02: Model Documentation
Model documentation must satisfy EU AI Act Annex IV, which specifies required content including: a general description of the AI system, a detailed description of development methodology, the design specifications of the system, a description of the monitoring and updating processes, and the validation and testing procedures used. Automate documentation generation from your ML pipeline metadata to prevent documentation from drifting from reality.
# Model Card Template — aligned with EU AI Act Annex IV
model:
name: ""
version: ""
type: "" # classification, regression, generation, etc.
intended_purpose: ""
intended_users: ""
out_of_scope_uses: []
development:
training_methodology: ""
architecture: ""
framework: ""
hardware_used: ""
training_duration: ""
hyperparameters: {}
data:
training_datasets: [] # references to data inventory
validation_dataset: ""
test_dataset: ""
data_preprocessing: []
evaluation:
metrics:
- name: ""
value: 0.0
dataset: ""
fairness_metrics:
- name: ""
value: 0.0
demographic_group: ""
robustness_tests: []
failure_modes: []
limitations:
known_limitations: []
recommendations_for_use: []
out_of_distribution_behavior: ""
oversight:
human_oversight_level: "" # HITL, HOTL, HIC
override_mechanism: ""
monitoring_plan: ""
update_cadence: ""
lifecycle:
deployment_date: ""
review_date: ""
retirement_criteria: []
version_history: []TR-03: Audit Logging
AI audit logs must capture more than traditional application logs. EU AI Act Article 12(1) requires that high-risk AI systems be designed to automatically record events (logs) over their lifetime. These logs must enable tracing of the system's operation through its lifecycle. For each inference, record the model version, input features (or a secure hash of them), output, confidence scores, any human override, and the decision timestamp. Logs must be immutable and retained for the period appropriate to the system's intended purpose.
interface AIAuditLogEntry {
// Identity
traceId: string;
timestamp: string; // ISO 8601
systemId: string;
modelId: string;
modelVersion: string;
// Input/Output
inputHash: string; // SHA-256 of input (don't log raw PII)
inputSchema: string;
outputHash: string;
outputSummary: string; // Non-PII summary
confidenceScore: number;
// Decision context
decisionType: string;
humanOverride: boolean;
overrideReason?: string;
overrideBy?: string;
// Metadata
latencyMs: number;
tokenCount?: number;
featureFlags: Record<string, boolean>;
regulatoryContext: string[]; // Which regulations apply
}
// Log to append-only store (e.g., immutable ledger,
// write-once cloud storage, or tamper-evident logging service).
// Retention: minimum 5 years for EU AI Act high-risk systems,
// 6 years for HIPAA, per your longest applicable requirement.AC-01: Human Oversight Mechanism
Human oversight is one of the most architecturally significant compliance requirements. EU AI Act Article 14 requires that high-risk AI systems be designed to be effectively overseen by natural persons, including the ability to fully understand the system's capacities and limitations, to properly monitor its operation, and to decide not to use the system or to disregard, override, or reverse the output. This means building kill switches, confidence thresholds that trigger human review, and queue-based workflows for high-stakes decisions.
type OversightLevel = "HITL" | "HOTL" | "HIC";
// HITL: Human-in-the-loop (human approves every decision)
// HOTL: Human-on-the-loop (human monitors, intervenes on exceptions)
// HIC: Human-in-command (human sets parameters, system operates)
interface OversightConfig {
systemId: string;
level: OversightLevel;
confidenceThreshold: number; // Below this, route to human
escalationRules: EscalationRule[];
killSwitch: {
enabled: boolean;
authorizedRoles: string[];
notificationChannels: string[];
};
reviewQueue: {
maxPendingDecisions: number;
slaMinutes: number;
fallbackAction: "block" | "default-safe" | "escalate";
};
}
interface EscalationRule {
condition: string; // e.g., "confidence < 0.7"
action: "queue" | "block" | "notify";
assignTo: string; // role or team
slaMinutes: number;
}
// Evidence: oversight configuration per system,
// human review completion records, override logs,
// kill switch activation historyPV-04: PII Detection and Masking
PII detection must be applied at multiple points in the AI pipeline: when data enters the training pipeline, when data is sent to inference endpoints, and when model outputs are returned to users. This is especially critical for generative AI systems that may reproduce training data containing PII. HIPAA de-identification under 45 CFR 164.514(b) Safe Harbor requires removal of eighteen specific identifier categories. CCPA Section 1798.140(v) defines personal information broadly, including inferences drawn from other data.
from dataclasses import dataclass, field
from enum import Enum
from typing import List
class PIICategory(Enum):
NAME = "name"
EMAIL = "email"
PHONE = "phone"
SSN = "ssn"
ADDRESS = "address"
DOB = "date_of_birth"
MEDICAL_RECORD = "medical_record_number"
HEALTH_PLAN = "health_plan_beneficiary"
ACCOUNT = "account_number"
BIOMETRIC = "biometric_identifier"
IP_ADDRESS = "ip_address"
DEVICE_ID = "device_identifier"
@dataclass
class PIIDetection:
category: PIICategory
start: int
end: int
confidence: float
text_snippet: str # masked version for logging
@dataclass
class ScanResult:
document_id: str
scan_timestamp: str
detections: List[PIIDetection] = field(default_factory=list)
pii_found: bool = False
categories_found: List[str] = field(default_factory=list)
def requires_remediation(self) -> bool:
return any(
d.confidence > 0.85 for d in self.detections
)
# Run PII scanning at three pipeline stages:
# 1. Data ingestion (before data enters training pipeline)
# 2. Inference input (before user data reaches the model)
# 3. Inference output (before model response reaches user)SC-02: Adversarial Robustness Testing
EU AI Act Article 15(4) requires that high-risk AI systems be resilient against attempts by unauthorized third parties to alter their use, outputs, or performance by exploiting system vulnerabilities. For LLM-based systems, this primarily means testing for prompt injection attacks (both direct and indirect), jailbreak attempts, and data extraction attacks. For traditional ML models, test for evasion attacks, model inversion, and membership inference. Document all test results and maintain a vulnerability register.
interface AdversarialTestCase {
id: string;
category: "prompt-injection" | "jailbreak" | "data-extraction"
| "evasion" | "model-inversion" | "membership-inference";
severity: "critical" | "high" | "medium" | "low";
description: string;
input: string;
expectedBehavior: string;
actualBehavior?: string;
passed?: boolean;
}
interface AdversarialTestReport {
systemId: string;
modelVersion: string;
testDate: string;
testSuiteVersion: string;
totalTests: number;
passed: number;
failed: number;
criticalFailures: number;
testCases: AdversarialTestCase[];
remediationPlan: {
finding: string;
action: string;
owner: string;
deadline: string;
}[];
}
// Run adversarial testing:
// - Before every production deployment
// - After model updates or fine-tuning
// - After changes to input/output filtering
// - On a recurring schedule (at least quarterly)MG-01: Model Risk Assessment
Model risk assessment should be performed before development begins and updated throughout the lifecycle. EU AI Act Article 9 requires a risk management system that identifies and analyses known and reasonably foreseeable risks, estimates and evaluates the risks that may emerge during intended use and foreseeable misuse, and evaluates other possible arising risks based on post-market monitoring data. The NIST AI RMF Map function provides a structured approach to identifying and contextualizing risks.
interface RiskAssessment {
systemId: string;
assessmentDate: string;
assessor: string;
riskClassification: "unacceptable" | "high" | "limited" | "minimal";
intendedPurpose: string;
affectedPopulations: string[];
deploymentContext: string;
risks: {
id: string;
description: string;
category: "safety" | "rights" | "discrimination" | "privacy"
| "security" | "operational";
likelihood: 1 | 2 | 3 | 4 | 5;
impact: 1 | 2 | 3 | 4 | 5;
inherentRiskScore: number;
mitigations: string[];
residualRiskScore: number;
acceptanceDecision: "accept" | "mitigate" | "transfer" | "avoid";
owner: string;
reviewDate: string;
}[];
approvalChain: {
role: string;
name: string;
decision: "approved" | "rejected" | "conditional";
conditions?: string[];
date: string;
}[];
}
// Risk score = likelihood * impact
// Scores 15+ require executive approval
// Scores 20+ require board-level reviewAC-03: Ongoing Monitoring and Drift Detection
Post-deployment monitoring is where many compliance programs fail. Models degrade silently as data distributions shift, and by the time accuracy drops enough for users to notice, the regulatory exposure may be significant. EU AI Act Article 72 requires deployers of high-risk AI systems to monitor operation on the basis of the instructions of use and to inform providers when they identify serious incidents. Implement statistical monitoring of input distributions, output distributions, and performance metrics against defined thresholds.
interface DriftMonitorConfig {
systemId: string;
modelVersion: string;
metrics: {
name: string;
type: "accuracy" | "latency" | "distribution" | "fairness";
threshold: number;
direction: "above" | "below"; // alert when crossing
windowSize: string; // e.g., "1h", "24h", "7d"
}[];
alertChannels: string[];
escalationPolicy: {
warningThreshold: number; // e.g., 0.90 of baseline
criticalThreshold: number; // e.g., 0.80 of baseline
autoRollback: boolean;
rollbackTarget: string; // previous model version
};
}
// Monitor these signals continuously:
// 1. Input distribution shift (KL divergence, PSI)
// 2. Output distribution shift (prediction distribution)
// 3. Performance metrics (accuracy, precision, recall, F1)
// 4. Fairness metrics (demographic parity ratio per group)
// 5. Latency percentiles (p50, p95, p99)
// 6. Error rates and error type distributionAudit Preparation
Whether you are preparing for a SOC 2 audit, ISO 42001 certification, or a regulatory inquiry under the EU AI Act, the audit preparation process follows a similar pattern. The steps below provide a repeatable procedure for getting audit-ready.
- 1
Inventory all AI systems
Create a comprehensive register of every AI system in production, development, and procurement. Include system name, risk classification, owner, deployment date, applicable regulations, and current compliance status. This inventory is the scope boundary for your audit.
- 2
Map controls to evidence
For each applicable control in the compliance matrix, identify the evidence artifact that demonstrates compliance. Evidence may include configuration files, automated test reports, policy documents, training records, or system logs. Document where each artifact is stored and who is responsible for maintaining it.
- 3
Run gap analysis
Compare your current control implementation against the required controls for your applicable regulations. Score each control as fully implemented, partially implemented, or not implemented. Prioritize gaps by regulatory risk (required controls first) and implementation effort.
- 4
Remediate critical gaps
Address fully missing required controls first. For partially implemented controls, document the gap and create a remediation plan with timelines. Some gaps may be addressable through compensating controls or risk acceptance with appropriate approval.
- 5
Collect and organize evidence
Gather all evidence artifacts into a structured evidence library organized by control category. Each artifact should include metadata: control ID, regulation reference, collection date, validity period, and responsible party. Automate evidence collection where possible to reduce manual effort in future audits.
- 6
Conduct internal audit
Run through the audit procedure internally before the external audit. Have someone unfamiliar with the systems review the evidence against the control requirements. Document findings and remediate any issues discovered. This dry run surfaces gaps that are invisible to the team that implemented the controls.
- 7
Prepare audit narratives
Write clear descriptions of how each control is implemented, including the technical mechanisms, responsible roles, and monitoring procedures. Auditors need to understand not just what you do, but why and how you verify it is working. Avoid jargon and provide system diagrams where helpful.
- 8
Establish continuous compliance
After the audit, transition from point-in-time evidence collection to continuous compliance monitoring. Automate evidence generation, set up alerts for control failures, and establish a regular review cadence. The goal is to be audit-ready at all times rather than scrambling before each audit cycle.
Evidence Collection Framework
Each control in the compliance matrix requires specific evidence artifacts to demonstrate compliance during audits. The table below maps control categories to the types of evidence typically required, the collection frequency, and the storage requirements.
| Control Category | Evidence Types | Collection Frequency | Retention Period |
|---|---|---|---|
| Data Governance | Dataset inventories, data flow diagrams, consent records, bias reports, data quality scorecards | Per dataset version + quarterly review | Duration of model deployment + 5 years |
| Model Governance | Model cards, risk assessments, validation reports, version history, approval records | Per model version + annual review | Duration of model deployment + 5 years |
| Security | Access control logs, penetration test reports, adversarial test results, vulnerability assessments | Continuous logging + quarterly testing | Minimum 3 years (6 years for HIPAA) |
| Privacy | PII scan reports, consent management logs, deletion request records, DPIA/PIA documents | Continuous scanning + per deletion request | Duration of data processing + 5 years |
| Transparency | Audit logs, disclosure notices, explainability outputs, model cards, user-facing documentation | Continuous logging + per model release | EU AI Act: appropriate to intended purpose; HIPAA: 6 years |
| Accountability | Human review records, incident reports, monitoring dashboards, audit findings, training certificates | Continuous monitoring + per incident + annual audit | Minimum 5 years (align with longest applicable regulation) |
Gap Analysis Self-Assessment
Use this checklist to perform a quick gap analysis of your current compliance posture. Check each item that you have fully implemented. Items left unchecked represent gaps that should be prioritized based on the compliance matrix requirements for your applicable regulations.
Implementation Priority Matrix
Not all controls require the same level of implementation effort, and not all deliver the same compliance impact. The matrix below categorizes the controls by effort and impact to help you prioritize your compliance roadmap. Start with high-impact, lower-effort controls to build a compliance baseline quickly, then tackle the higher-effort items.
6 controls
High Impact, Lower Effort
Audit logging, AI system disclosure, model documentation, data inventory, PII detection, incident reporting — implement these first for immediate compliance coverage
8 controls
High Impact, Higher Effort
Human oversight mechanisms, bias detection, model risk assessment, adversarial testing, drift monitoring, consent management, data lineage, decision explainability
6 controls
Moderate Impact, Lower Effort
Model version control, model cards, data quality assessment, periodic audits, data minimization review, right to deletion process
4 controls
Moderate Impact, Higher Effort
Inference endpoint security hardening, training pipeline security, model validation test suites, access control overhaul
Common Compliance Gaps
Based on common patterns observed in AI compliance programs, these are the areas where organizations most frequently fall short. Each gap below represents a finding that could result in audit failure or regulatory exposure.
Gap: Audit logs that capture application events but not AI-specific context. If your logs do not include the model version, confidence score, and input/output hashes for each AI decision, they are insufficient for EU AI Act Article 12 compliance. Standard application logging frameworks need to be extended with AI-specific fields.
Gap: Model documentation that exists at training time but is never updated. EU AI Act Article 11 and ISO 42001 Clause 7.5 require documentation to be kept up to date throughout the system lifecycle. Treat model cards as living documents with version control and review cadences, not as one-time artifacts.
Gap: Human oversight mechanisms that exist on paper but are not exercised. If your human review queue has a 48-hour SLA but reviewers consistently rubber-stamp decisions without meaningful review, the control is not effective. Track review time, override rates, and reviewer feedback to demonstrate genuine human oversight.
Gap: Bias testing performed only on the overall dataset without slice-based analysis. Aggregate fairness metrics can mask significant bias in demographic subgroups. EU AI Act Article 10(2)(f) requires examination for biases that may affect specific groups. Implement slice-based evaluation across all protected characteristics.
Gap: No process for propagating data deletion requests to trained models. When a CCPA deletion request arrives, deleting the record from your database is not sufficient if the model was trained on that data. You need either machine unlearning capabilities or a documented retraining process that excludes the deleted data.
Gap: Third-party AI services used without compliance assessment. When you use an external AI API, you inherit that provider's compliance posture. If the provider cannot demonstrate SOC 2 compliance, provide a BAA for HIPAA, or confirm EU AI Act compliance for their models, that gap transfers to your organization.
Downloadable Assets
The following templates and tools are available to help you implement the compliance matrix in your organization. Each asset is designed to be customized for your specific regulatory requirements and organizational structure.
ai-compliance-matrix-template.xlsx
XLSX · 245 KB
The full 24-control compliance matrix in spreadsheet format with filtering by regulation, control category, and implementation status. Includes a scoring sheet for gap analysis.
model-card-template.docx
DOCX · 78 KB
Model card template aligned with EU AI Act Annex IV documentation requirements. Pre-structured sections with guidance notes and example entries.
risk-assessment-template.xlsx
XLSX · 156 KB
AI risk assessment template with pre-populated risk categories, a 5x5 scoring matrix, mitigation tracking, and approval workflow columns.
audit-evidence-tracker.xlsx
XLSX · 198 KB
Evidence collection tracker organized by control ID. Tracks artifact name, location, collection date, responsible party, next collection date, and audit status.
Production Compliance Checklist
Before any AI system moves to production, verify that these compliance controls are in place. This checklist is organized by control category and should be completed by the engineering lead and reviewed by the compliance team.
Data Controls
Model Controls
Documentation
Monitoring
Version History
Version History
1.0.0 · 2026-03-01
- • Initial release with 24 controls across 6 regulatory frameworks
- • Implementation guides for 10 high-priority controls with code examples
- • Evidence collection framework with retention period guidance
- • Production compliance checklist with 21 items across 4 categories
- • Gap analysis self-assessment checklist
- • Downloadable templates for compliance matrix, model cards, risk assessments, and evidence tracking