Key Takeaway
A dedicated AI risk register surfaces risks that generic enterprise risk registers miss -- training data bias, model drift, prompt injection, and vendor model deprecation. This template provides pre-populated risk categories with AI-calibrated scoring rubrics, mitigation tracking workflows, and integration points with your existing enterprise risk management process.
Why Generic Risk Registers Fail for AI
Enterprise risk registers are designed around categories like financial risk, operational risk, compliance risk, and reputational risk. AI risks cut across all of these categories in ways that generic registers do not accommodate. Model drift is an operational risk with compliance implications. Training data bias is a technical risk with legal, reputational, and financial consequences. Prompt injection is a security risk that can manifest as a data privacy violation. When AI risks are forced into generic categories, they get split across multiple register entries, lose their AI-specific context, and become difficult to assess because the scoring rubric was not calibrated for probabilistic system failures.
Risk Scoring Rubric
The scoring rubric uses a five-point scale for both likelihood and impact, with descriptors calibrated to AI system behavior rather than generic business risk. The likelihood scale accounts for the probabilistic nature of AI failures (model degradation is not a question of if but when), and the impact scale considers both immediate effects and downstream consequences.
| Score | Likelihood | Impact |
|---|---|---|
| 1 - Rare | Has not occurred in similar systems; requires highly unusual conditions; probability < 5% annually | Negligible impact; auto-recoverable; no user-visible effect |
| 2 - Unlikely | Has occurred in similar systems but is infrequent; probability 5-15% annually | Minor degradation; affects small user segment; no regulatory implication |
| 3 - Possible | Expected to occur at some point; probability 15-40% annually; common in the industry | Moderate impact; noticeable quality degradation; may trigger internal investigation |
| 4 - Likely | Expected to occur within the next year; probability 40-70% annually; multiple industry precedents | Significant impact; affects substantial user base; potential regulatory inquiry; remediation cost material |
| 5 - Almost Certain | Expected to occur within months; probability > 70% annually; inherent to the system design | Severe impact; broad user harm; regulatory enforcement action; material financial or reputational damage |
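As a sketch, the likelihood bands in the rubric above can be encoded so that an estimated annual failure probability maps directly onto a rubric score. All names here are illustrative, not part of the template:

```typescript
type Score = 1 | 2 | 3 | 4 | 5;

// Labels taken from the rubric table above.
const LIKELIHOOD_LABELS: Record<Score, string> = {
  1: "Rare",
  2: "Unlikely",
  3: "Possible",
  4: "Likely",
  5: "Almost Certain",
};

// Map an estimated annual failure probability onto the rubric's bands:
// <5%, 5-15%, 15-40%, 40-70%, >70%.
function probabilityToLikelihood(p: number): Score {
  if (p < 0.05) return 1;
  if (p < 0.15) return 2;
  if (p < 0.4) return 3;
  if (p < 0.7) return 4;
  return 5;
}
```

This keeps scoring consistent across reviewers: anyone who can estimate an annual probability gets the same rubric score.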
Pre-Populated Risk Categories
The following risk categories are pre-populated with the most common AI-specific risks. Use these as a starting point and add organization-specific risks based on your AI system portfolio, industry, and regulatory environment. Each risk entry should be assigned an owner and reviewed on a defined cadence.
Technical Risks
Technical risks arise from the inherent characteristics of ML systems: models degrade over time as data distributions shift, training can introduce subtle biases that are difficult to detect, and system complexity makes failure modes hard to predict. The most important technical risk for most organizations is model drift -- the gradual degradation of model accuracy that occurs as the real world changes relative to the training data.
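One common way to monitor for drift is the Population Stability Index (PSI), which compares the bucketed distribution of a feature (or of model scores) at training time against production. This is a minimal sketch; the bucket proportions and the 0.2 alert threshold are conventional rules of thumb, not values prescribed by this template:

```typescript
// PSI between two bucketed distributions. Inputs are per-bucket
// proportions that each sum to ~1; eps guards against log(0).
function psi(expected: number[], actual: number[]): number {
  const eps = 1e-6;
  return expected.reduce((sum, e, i) => {
    const a = actual[i];
    return sum + (a - e) * Math.log((a + eps) / (e + eps));
  }, 0);
}

// A common heuristic: PSI > 0.2 indicates significant distribution shift.
function driftAlert(expected: number[], actual: number[]): boolean {
  return psi(expected, actual) > 0.2;
}
```

Running a check like this on a schedule turns model drift from an invisible failure mode into a register entry with a measurable trigger.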
Data Risks
Data risks encompass training data quality issues, data poisoning attacks, privacy violations through model memorization, and data pipeline failures that corrupt features. Data risks are particularly dangerous because they can be invisible: a biased training dataset produces a biased model that generates biased outputs, and without proactive fairness testing, the bias may never be detected until it causes harm.
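Proactive fairness testing can start with a simple metric such as demographic parity difference: the gap in positive-outcome rates between two groups. This is a sketch only; the 0.1 threshold is an illustrative assumption, not a legal or regulatory standard, and real fairness evaluation requires multiple metrics and domain judgment:

```typescript
// Fraction of positive outcomes in a group.
function positiveRate(outcomes: boolean[]): number {
  return outcomes.filter(Boolean).length / outcomes.length;
}

// Demographic parity difference between two groups.
function parityGap(groupA: boolean[], groupB: boolean[]): number {
  return Math.abs(positiveRate(groupA) - positiveRate(groupB));
}

// Flag for review when the gap exceeds an (assumed) threshold.
function flagsBias(groupA: boolean[], groupB: boolean[], threshold = 0.1): boolean {
  return parityGap(groupA, groupB) > threshold;
}
```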
Security Risks
AI-specific security risks include prompt injection (manipulating model behavior through crafted inputs), model extraction (using API access to reconstruct proprietary models), adversarial attacks (inputs designed to cause misclassification), and training data extraction (recovering training data from model outputs). These risks require defenses beyond traditional application security.
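As one illustrative layer of defense, known prompt-injection phrasings can be screened before input reaches the model. Pattern matching alone is not a robust defense, since attackers can rephrase freely; treat a check like this as a cheap first pass alongside privilege separation, output filtering, and human review of high-risk actions. The patterns below are examples, not a complete ruleset:

```typescript
// Illustrative patterns for common injection phrasings.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /disregard (your|the) system prompt/i,
  /you are now (a|an) /i,
];

// First-pass screen; a match warrants logging and closer inspection,
// not necessarily an automatic block.
function looksLikeInjection(userInput: string): boolean {
  return INJECTION_PATTERNS.some((p) => p.test(userInput));
}
```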
```typescript
/**
 * AI Risk Register type definitions.
 * Use these types to structure your risk register
 * entries in a machine-readable format.
 */
type RiskCategory =
  | "technical"
  | "data"
  | "security"
  | "operational"
  | "compliance";

type RiskStatus =
  | "identified"
  | "assessed"
  | "mitigating"
  | "accepted"
  | "closed";

interface RiskEntry {
  id: string; // e.g., "AI-RISK-2026-001"
  category: RiskCategory;
  title: string;
  description: string;
  affectedSystems: string[]; // Model IDs or system names

  // Scoring
  likelihoodScore: 1 | 2 | 3 | 4 | 5;
  impactScore: 1 | 2 | 3 | 4 | 5;
  inherentRisk: number; // likelihood * impact
  residualRisk: number; // after controls applied

  // Controls
  currentControls: string[];
  controlEffectiveness: "effective" | "partially-effective" | "ineffective";

  // Mitigation
  mitigationPlan: string;
  mitigationStatus: RiskStatus;
  mitigationDeadline: string; // ISO date

  // Ownership
  riskOwner: string;
  reviewFrequency: "monthly" | "quarterly" | "annually";
  lastReviewDate: string;
  nextReviewDate: string;

  // Metadata
  dateIdentified: string;
  lastUpdated: string;
  relatedRegulations: string[]; // e.g., ["EU AI Act Art 9", "GDPR Art 35"]
}

function calculateRiskLevel(
  likelihood: number,
  impact: number,
): "low" | "medium" | "high" | "critical" {
  const score = likelihood * impact;
  if (score >= 20) return "critical";
  if (score >= 12) return "high";
  if (score >= 6) return "medium";
  return "low";
}
```

Risk Review Workflow
Risk register entries are living documents. Each entry must be reviewed on a defined cadence based on its risk level: critical and high risks monthly, medium risks quarterly, and low risks annually. The review process assesses whether the likelihood or impact has changed, whether current controls remain effective, whether the mitigation plan is on track, and whether new information has emerged that affects the risk assessment.
Step 1: Risk Identification
Identify new AI-specific risks through system reviews, incident analysis, threat modeling, regulatory updates, and team retrospectives. Create a register entry with initial likelihood and impact scores.
Step 2: Risk Assessment
Evaluate the risk with input from engineering, product, legal, and compliance. Score likelihood and impact using the rubric. Identify current controls and assess their effectiveness.
Step 3: Mitigation Planning
Develop a mitigation plan for risks above the acceptable threshold. Assign an owner, set a deadline, and define success criteria. For accepted risks, document the rationale for acceptance.
Step 4: Regular Review
Review risk entries on the defined cadence. Update scores based on new information, control changes, or environmental shifts. Escalate risks that have increased in severity.
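The cadence rule in the workflow above (critical and high monthly, medium quarterly, low annually) can be derived mechanically from the risk score, which keeps review scheduling consistent with scoring. This sketch mirrors the field names in the type definitions earlier; the helper names themselves are illustrative:

```typescript
type ReviewFrequency = "monthly" | "quarterly" | "annually";

// Derive review cadence from likelihood * impact, matching the
// thresholds used for risk levels (>=12 high/critical, >=6 medium).
function reviewFrequencyFor(likelihood: number, impact: number): ReviewFrequency {
  const score = likelihood * impact;
  if (score >= 12) return "monthly";  // high and critical risks
  if (score >= 6) return "quarterly"; // medium risks
  return "annually";                  // low risks
}

// Flag entries whose next scheduled review has passed.
function isOverdue(nextReviewDate: string, today: Date = new Date()): boolean {
  return new Date(nextReviewDate) < today;
}
```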
Keep the risk register as a living artifact that is reviewed during sprint planning and architecture reviews, not as a document that is updated only during quarterly compliance reviews. When a team is building a new AI feature, the risk register should be consulted to identify applicable risks and ensure mitigations are built into the design.
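With register entries in a machine-readable format, consulting the register during design reviews can be as simple as filtering by affected system. A minimal sketch, assuming entries carry the `affectedSystems` field from the type definitions above:

```typescript
interface RegisterEntry {
  title: string;
  affectedSystems: string[]; // Model IDs or system names
}

// Return the register entries applicable to a given system or model.
function risksForSystem(register: RegisterEntry[], systemId: string): RegisterEntry[] {
  return register.filter((r) => r.affectedSystems.includes(systemId));
}
```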
Version History
1.0.0 · 2026-03-01
- Initial release with AI-calibrated risk scoring rubric
- Pre-populated risk categories covering technical, data, security, operational, and compliance risks
- TypeScript type definitions for machine-readable risk register entries
- Four-step risk review workflow with cadence guidance
- Production checklist for register setup and process integration