v2026.1 — Updated March 2026

Enterprise AI Governance

Board-ready governance that moves at the speed of your AI ambitions. Decision rights, embedded controls, and an evidence trail your auditors will thank you for.

Decision Rights · Embedded Controls · Evidence Trail
What You Get

Governance Deliverables

Governance Charter

Council composition, decision authority, meeting cadence, and escalation paths—ready for executive sign-off.

Tiering Rubric

Risk-based classification (Tier 1–3) with clear criteria for data sensitivity, autonomy level, and regulatory exposure.

Intake Workflow

Standardized use-case submission, automatic tier assignment, and routing to the right approvers.

Evidence Pack Checklist

Model cards, data sheets, test plans, approval records, and monitoring plans—scoped by tier.

Board Dashboard Sample

Quarterly reporting template covering portfolio health, risk posture, value realization, and incidents.

Incident & Kill-Switch Playbook

Detection, escalation, remediation, and communication protocols for AI-related incidents.

  • Framework deployed in 47 days
  • 24 AI use cases classified
  • 0 compliance findings
“We went from no governance to board-defensible controls in under two months. The framework paid for itself in the first audit cycle.” — CTO, Series D Fintech (under NDA)
Governance Structure

Operating Model

A clear operating model separates strategic oversight (board / executives) from day-to-day controls (council / teams). Adapt titles to match your organization.

Role | Responsibilities
Board / Audit Committee | Oversight, risk appetite, accountability; receives quarterly AI risk and value reporting.
Executive Sponsor | Sets priorities, resolves conflicts, ensures funding and cross-functional alignment.
AI Governance Council | Approves high-impact use cases, policies, and tiering rules; tracks the portfolio.
Risk / Compliance / Legal | Defines controls, reviews high-risk uses, ensures regulatory and contractual compliance.
Product / Engineering | Builds and operates AI systems; maintains documentation, monitoring, and incident response.
Data Governance | Data quality, lineage, and access controls; ensures proper data use and retention.
Risk-Based Controls

Decision Rights by Tier

Control | Tier 1 (Low Impact) | Tier 2 (Medium Impact) | Tier 3 (High Impact)
Approvers | Team lead | Governance council | Executive + legal/compliance
Artifacts | Use-case brief, basic model card | Full model card, data sheet, test plan | All artifacts + red-team results + legal review
Monitoring | Standard logging | Drift detection + quarterly review | Real-time monitoring + incident runbooks
Release Gate | Self-service deploy | Council sign-off required | Executive approval + staged rollout
Human Oversight | Optional review | Mandatory review before external output | Human-in-the-loop for all decisions + kill switch
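
For teams that want to automate intake, the matrix above can live as configuration. A minimal sketch in Python follows, assuming a hypothetical intake tool that looks up controls by tier; names like TIER_CONTROLS and required_approvers are illustrative, not framework deliverables.

```python
# Illustrative only: the decision-rights matrix above encoded as data so an
# intake tool can look up requirements by tier. All names are hypothetical.

TIER_CONTROLS = {
    1: {  # Low impact
        "approvers": ["team_lead"],
        "artifacts": ["use_case_brief", "basic_model_card"],
        "monitoring": "standard_logging",
        "release_gate": "self_service_deploy",
        "human_oversight": "optional_review",
    },
    2: {  # Medium impact
        "approvers": ["governance_council"],
        "artifacts": ["model_card", "data_sheet", "test_plan"],
        "monitoring": "drift_detection_plus_quarterly_review",
        "release_gate": "council_sign_off",
        "human_oversight": "mandatory_review_before_external_output",
    },
    3: {  # High impact
        "approvers": ["executive", "legal_compliance"],
        "artifacts": ["model_card", "data_sheet", "test_plan",
                      "red_team_results", "legal_review"],
        "monitoring": "real_time_plus_incident_runbooks",
        "release_gate": "executive_approval_staged_rollout",
        "human_oversight": "human_in_the_loop_plus_kill_switch",
    },
}

def required_approvers(tier: int) -> list[str]:
    """Who must sign off before release for a given tier."""
    return TIER_CONTROLS[tier]["approvers"]

print(required_approvers(3))  # ['executive', 'legal_compliance']
```

Keeping the rules as data rather than prose means the intake workflow, release gates, and dashboards can all read from the same source of truth.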
Compliance Mapping

Regulatory Alignment Matrix

Our governance framework provides directional alignment with leading AI regulations and standards. This crosswalk shows how each governance component addresses specific regulatory expectations.

Governance Control | NIST AI RMF | EU AI Act | SEC Guidance | ISO/IEC 42001
Risk Tiering & Classification | MAP 1.1, MAP 1.5 | Art. 6 (Risk categories) | Risk factor disclosure | 6.1.2 (Risk assessment)
Governance Council & Decision Rights | GOVERN 1.1, 1.2 | Art. 9 (Risk management) | Board oversight requirements | 5.1 (Leadership)
Evidence Artifacts & Documentation | MAP 3.4, MEASURE 2.6 | Art. 11 (Technical documentation) | Internal controls | 7.5 (Documented information)
Monitoring & Drift Detection | MANAGE 3.1, 4.1 | Art. 72 (Post-market monitoring) | Continuous disclosure | 9.1 (Performance evaluation)
Incident Response & Kill Switch | MANAGE 4.2, 4.3 | Art. 73 (Serious incident reporting) | Material event disclosure | 10.2 (Nonconformity)
Human Oversight Requirements | GOVERN 6.1 | Art. 14 (Human oversight) | Management accountability | 8.1 (Operational planning)
Audit Trail & Accountability | GOVERN 1.7 | Art. 12 (Record-keeping) | Books & records | 9.2 (Internal audit)

Note: This mapping provides directional alignment. Specific compliance obligations depend on your jurisdiction, industry, and the nature of your AI systems. The regulatory landscape is evolving; we update mappings quarterly.

Request full compliance matrix (NDA required) →
Critical Distinction

Assistive vs Operative

Assistive

A human reviews, edits, and approves every AI output before it reaches a customer, system, or decision.

  • ✓ Draft generation, summarization, search
  • ✓ Internal analysis and recommendations
  • ✓ Human always in the loop

Operative

AI takes actions, makes decisions, or communicates with external parties without per-action human approval.

  • ⚠ Automated customer responses
  • ⚠ Autonomous workflows and triggers
  • ⚠ Requires Tier 2–3 controls minimum

Why this matters: The assistive/operative boundary determines your minimum control requirements. Misclassifying an operative system as assistive is one of the most common governance failures. For operative workflows, we recommend default-deny approval with explicit human override capability.
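
To make the default-deny recommendation concrete, here is a minimal sketch (ours, with hypothetical action names and placeholder handlers) of an operative dispatcher that allow-lists known-safe actions and holds everything else for human approval:

```python
# Hypothetical sketch of default-deny dispatch for operative AI actions:
# anything not explicitly allow-listed is held for human approval.

ALLOWED_OPERATIVE_ACTIONS = {"send_order_status_reply"}  # illustrative

def execute(action: str, payload: dict) -> str:
    # Placeholder for the real action handler.
    return f"executed: {action}"

def queue_for_human_review(action: str, payload: dict) -> str:
    # Placeholder: in practice, write to a review queue with an SLA timer.
    return f"held for human approval: {action}"

def dispatch(action: str, payload: dict) -> str:
    """Default deny: only allow-listed actions run without a human."""
    if action in ALLOWED_OPERATIVE_ACTIONS:
        return execute(action, payload)
    return queue_for_human_review(action, payload)

print(dispatch("close_customer_account", {}))  # held for human approval
```

The design choice is that safety is the fallthrough path: forgetting to classify a new action pauses it rather than letting it run unsupervised.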

Embedded Controls

Lifecycle Control Gates

Governance is effective when embedded into delivery. These gates define where controls apply, what evidence is produced, and who approves.

1. Ideation Gate (intake + tier)
   Evidence: Use-case brief, tier classification
   Approver: Council (Tier 2–3) or team lead (Tier 1)
   SLA: 3–5 business days

2. Data Readiness Gate (access + quality)
   Evidence: Data sheet, access controls, quality metrics
   Approver: Data governance + security
   SLA: 5–10 business days

3. Build Gate (test + document)
   Evidence: Model card, test plan, bias testing
   Approver: Engineering lead + QA
   SLA: Sprint-aligned

4. Pre-Production Gate (security + sign-off)
   Evidence: Security review, compliance check, red-team (Tier 3)
   Approver: Risk/compliance + legal
   SLA: 5–15 business days

5. Deployment Gate (approval + rollout)
   Evidence: Approval record, rollout plan, rollback plan
   Approver: Council (Tier 2–3) or automatic (Tier 1)
   SLA: 1–3 business days

6. Ongoing Monitoring (drift + incidents)
   Evidence: Monitoring plan, drift alerts, incident logs
   Approver: Ops team + quarterly council review
   SLA: Continuous
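
To show how a gate can be enforced rather than merely documented, here is a minimal CI-style check; the gate and evidence names are assumptions mirroring the list above, and the code is a sketch, not a shipped tool.

```python
# Illustrative CI-style gate check: block release unless every evidence
# item required at a gate is present. Gate contents are hypothetical
# examples taken from the gate list above.

GATE_EVIDENCE = {
    "ideation": {"use_case_brief", "tier_classification"},
    "deployment": {"approval_record", "rollout_plan", "rollback_plan"},
}

def gate_passes(gate: str, submitted: set[str]) -> bool:
    """True only if all required evidence for this gate was submitted."""
    missing = GATE_EVIDENCE[gate] - submitted
    if missing:
        print(f"{gate} gate blocked; missing evidence: {sorted(missing)}")
        return False
    return True

# Example: deployment is blocked because the rollback plan is absent.
gate_passes("deployment", {"approval_record", "rollout_plan"})
```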


Evidence Artifacts (by Tier)

For Tier 2–3 systems, the following artifacts are recommended as part of the evidence pack; Tier 1 systems require a subset based on risk. A minimal model-card sketch follows the list.

  • Model Card: Purpose, training data summary, limitations, intended users, and known risks.
  • Data Sheet: Sources, lineage, quality checks, retention rules, and access controls.
  • Test Plan: Accuracy, bias, robustness, security, and red-team results.
  • Approval Record: Sign-offs, tier classification, and required mitigations.
  • Monitoring Plan: Metrics, alert thresholds, drift detection, and incident runbooks.
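
To make the artifact list concrete, a minimal model-card skeleton might look like the following; the schema and example values are illustrative assumptions, not a mandated template.

```python
# Minimal model-card skeleton mirroring the fields in the list above.
# Schema and example values are illustrative, not a required format.

model_card = {
    "purpose": "Summarize inbound support tickets for triage",
    "training_data_summary": "12 months of de-identified ticket text",
    "limitations": ["English only", "quality degrades on very long tickets"],
    "intended_users": ["support triage team"],
    "known_risks": ["may omit safety-relevant details in summaries"],
    "tier": 2,  # drives which other artifacts are required
}
```
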
Response Framework

Incident & Kill-Switch Protocol

When an AI system misbehaves, response time is measured in minutes, not weeks. This protocol defines detection triggers, escalation tiers, and communication templates so your team responds decisively.

Detection Triggers

Drift Score Breach

Model output deviates beyond acceptable thresholds on accuracy, bias, or quality metrics. Auto-triggers L1 review.

Usage Anomaly

Unexpected volume spikes, novel input patterns, or access from unauthorized contexts. Flagged for immediate triage.

Compliance Alert

Regulatory change, audit finding, or customer complaint triggers policy review and potential system pause.
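
As an illustration of the drift score breach trigger, a monitoring job might compare rolling metrics against thresholds and open an L1 review on breach. The threshold values and names below are assumptions for the sketch.

```python
# Hypothetical drift check: compare rolling metrics against thresholds and
# auto-trigger L1 review on breach, per the protocol above. Threshold
# values and metric names are illustrative.

DRIFT_THRESHOLDS = {"accuracy_floor": 0.90, "bias_gap_ceiling": 0.05}

def drift_breaches(metrics: dict[str, float]) -> list[str]:
    """Names of metrics that breach their thresholds."""
    breaches = []
    if metrics["accuracy"] < DRIFT_THRESHOLDS["accuracy_floor"]:
        breaches.append("accuracy")
    if metrics["bias_gap"] > DRIFT_THRESHOLDS["bias_gap_ceiling"]:
        breaches.append("bias_gap")
    return breaches

breaches = drift_breaches({"accuracy": 0.87, "bias_gap": 0.02})
if breaches:
    # L1 response: auto-pause/throttle and page the on-call lead (30-min SLA).
    print(f"L1 triage triggered: {breaches}")
```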

Escalation Matrix

L1 — Auto-Pause

  • Response: System automatically paused or throttled. Fallback mode activated.
  • Owner: On-call engineering lead
  • SLA: Triage within 30 minutes. Resolve or escalate within 4 hours.

L2 — Council Review

  • Response: System remains paused. Governance council convenes. Root-cause analysis initiated.
  • Owner: AI Governance Council chair + risk/compliance
  • SLA: Council meeting within 24 hours. Written assessment within 48 hours.

L3 — Executive Kill-Switch

  • Response: Full system shutdown. External communications prepared. Regulatory notification if required.
  • Owner: Executive sponsor + general counsel
  • SLA: Kill-switch decision within 2 hours of L2 escalation. Board notification within 24 hours.
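
Teams that automate paging can encode this matrix as data so SLA timers and ownership come from one source of truth. A minimal sketch follows; the structure and names are illustrative assumptions, not part of the playbook deliverable.

```python
# Sketch of the escalation matrix as data, so paging and SLA timers can be
# driven from one source of truth. Structure and names are illustrative.

from datetime import timedelta

ESCALATION = {
    "L1": {"owner": "on-call engineering lead",
           "triage": timedelta(minutes=30),
           "resolve_or_escalate": timedelta(hours=4)},
    "L2": {"owner": "council chair + risk/compliance",
           "council_meeting": timedelta(hours=24),
           "written_assessment": timedelta(hours=48)},
    "L3": {"owner": "executive sponsor + general counsel",
           "kill_switch_decision": timedelta(hours=2),
           "board_notification": timedelta(hours=24)},
}

def next_level(level: str) -> str | None:
    """Next escalation level, or None if already at the top."""
    order = ["L1", "L2", "L3"]
    i = order.index(level)
    return order[i + 1] if i + 1 < len(order) else None

print(next_level("L1"))  # L2
```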

Communication Templates

Internal

Stakeholder notification with incident summary, current status, impacted systems, and expected resolution timeline.

Customer

Transparent disclosure of service impact, data exposure assessment, remediation steps, and point of contact for questions.

Regulator

Formal notification per jurisdictional requirements, including incident classification, timeline of events, and corrective actions.

Post-Incident Review: Within 5 business days of resolution, a written review is produced covering root cause, timeline, control gaps, and policy updates. The council reviews these quarterly, and they are summarized in board reporting.

Policy Framework

Minimum Viable Policy Pack

Policies should be short, enforceable, and aligned to your operating model. Start with this minimum set and expand as your portfolio grows.

Acceptable Use

Approved, prohibited, and restricted AI tools/models across the organization.

Data Use & Privacy

Permissible data categories, retention rules, and sensitive data handling requirements.

Model Risk Management

Validation requirements, bias testing protocols, and documentation expectations by tier.

Vendor & Third-Party

Procurement requirements, security reviews, and contractual protections for AI vendors.

Human Oversight

Where humans must remain in the loop, override capabilities, and escalation paths.

Incident Response

Detection, reporting, remediation, and post-incident communication for AI events.

Executive Visibility

Board Dashboard Preview

Your board and audit committee receive a quarterly AI governance report. Here’s what they’ll actually see—not a slide deck, but a live dashboard built from real governance data.

Q1 2026 AI Governance Report

Enterprise AI Portfolio Health

Updated: March 2026
  • 24 AI Use Cases (+6 this quarter)
  • 92% Classified by Tier (target: 100%)
  • 1 Open Incident (L1, under review)
  • 87% Compliance Score (up from 71% in Q4)

Risk Distribution

Tier 1: 14 · Tier 2: 7 · Tier 3: 3

58% Tier 1, 29% Tier 2, 13% Tier 3. All Tier 3 systems have active monitoring and incident runbooks.

Key Actions This Quarter

  • Completed Tier 3 red-team assessment for customer-facing chatbot
  • Updated vendor due diligence for 3 new AI tool procurements
  • In progress: EU AI Act readiness gap analysis

Sample dashboard with representative data. Your dashboard is populated from your governance platform with real metrics.

Implementation

30 / 60 / 90 Day Roadmap

Start small, enforce consistently, iterate. This roadmap prioritizes leverage and speed—not perfection.

Days 1–30: Foundation

  • Name an executive sponsor and form a small governance council
  • Define tiering criteria and minimum evidence artifacts
  • Publish acceptable-use and vendor guardrails policies
  • Create a single intake form and approval workflow

Days 31–60: Operationalize

  • Implement monitoring for priority systems (drift, incidents, value KPIs)
  • Establish incident response playbook and escalation paths
  • Integrate control gates into delivery pipelines
  • Begin quarterly reporting to executives and board

Days 61–90: Harden

  • Expand the policy pack and training program for teams
  • Harden vendor due diligence and contractual controls
  • Introduce periodic audits for high-impact systems
  • Iterate governance based on metrics and incident learnings
Measuring What Matters

Success Metrics & KPIs

Governance without measurement is governance theater. These KPIs track whether your framework is actually working—not just whether it exists on paper. We recommend reviewing monthly with council and quarterly with the board.

KPI | Target | Measurement Method | Review Cadence
AI use cases classified by tier | 100% | Intake system coverage / total known AI use cases | Monthly
Mean time to governance approval | < 10 business days | Intake submission to council/team-lead sign-off | Monthly
Incident detection → response | < 30 min (L1) | Alert timestamp to first-responder acknowledgment | Per incident
Policy coverage ratio | ≥ 6 core policies | Minimum viable policy pack completeness | Quarterly
Evidence artifact completeness | ≥ 90% (Tier 2–3) | Required artifacts present / required artifacts total | Per release gate
Board reporting compliance | 100% on schedule | Quarterly reports delivered on time with required metrics | Quarterly
Post-incident review completion | ≤ 5 business days | Incident resolution to written review delivery | Per incident
Training completion rate | ≥ 95% | Employees completing AI governance training / required total | Quarterly
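
As a sketch of how two of these KPIs might be computed from governance records, consider the following; the record shapes are assumptions, and business-day handling is omitted for brevity.

```python
# Illustrative KPI computations from governance records. The dict keys
# below are assumptions for the sketch, not a prescribed data model.

from datetime import date

def classification_coverage(classified: int, total_known: int) -> float:
    """KPI: share of known AI use cases classified by tier (target 100%)."""
    return classified / total_known if total_known else 0.0

def mean_approval_days(requests: list[dict]) -> float:
    """KPI: mean calendar days from intake submission to sign-off."""
    waits = [(r["approved_on"] - r["submitted_on"]).days for r in requests]
    return sum(waits) / len(waits)

print(round(classification_coverage(22, 24), 2))  # 0.92, as on the dashboard
print(mean_approval_days([
    {"submitted_on": date(2026, 3, 2), "approved_on": date(2026, 3, 9)},
    {"submitted_on": date(2026, 3, 5), "approved_on": date(2026, 3, 16)},
]))  # 9.0
```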

Tip: Start by tracking 3–4 KPIs in Month 1. Add the rest as your governance program matures. The goal is actionable visibility, not dashboard overload.

Common Questions

FAQ

What data do you need access to?
We do not require access to your production systems or sensitive data. Our advisory work is based on documentation review, stakeholder interviews, and architecture walkthroughs. Where access is needed for technical assessment, we work under NDA with scoped, time-limited credentials. We never request standing access—all credentials are time-bounded and revoked at engagement end.
Can we start under NDA before procurement?
Yes. We routinely execute mutual NDAs before any substantive discussion. This allows us to review your current state and provide an informed scope proposal before formal procurement begins. Most clients use this pre-engagement period to validate fit before committing budget—typically a 2–3 week window with no obligation.
Do you retain client materials? For how long?
Client working materials (decks, datasets, meeting notes) are purged within 30 days of engagement close. Engagement records (contracts, invoices, compliance documentation) are retained per standard business and regulatory requirements—typically 7 years for financial records. Specific retention windows are documented in our Privacy Policy and Procurement Packet. We can accommodate custom retention requirements if your organization requires shorter windows.
Do you subcontract governance work?
No subcontracting without explicit client approval. All advisory work is performed directly by our team unless otherwise agreed in writing. If specialized expertise is needed (e.g., regulatory counsel in a specific jurisdiction), we will propose and seek written approval before engaging any third party. Your data never leaves our direct control without your consent.
What do the first two weeks look like?
Week 1: Kickoff meeting, stakeholder mapping, current-state documentation review, initial tiering assessment, and an inventory of existing AI use cases. Week 2: Gap analysis against your target regulatory posture, draft governance charter, preliminary roadmap, and initial council composition recommendations. You receive a written deliverable at the end of week 2 with prioritized next steps, estimated effort, and quick wins you can implement immediately.
How does this differ from the free governance framework?
The free framework is a self-serve starting point with generic templates. Advisory engagements customize everything—tiering criteria, policy language, council composition, evidence requirements, and reporting cadence—to your organization’s specific risk profile, regulatory environment, and operating model. Think of the free framework as the “what” and advisory as the “how, tailored to your context.”
How does this map to NIST AI RMF?
Our framework aligns directly with the NIST AI Risk Management Framework across all four functions: Govern, Map, Measure, and Manage. Risk tiering maps to MAP 1.1 and 1.5, the governance council satisfies GOVERN 1.1–1.2, evidence artifacts address MAP 3.4 and MEASURE 2.6, and our incident protocols cover MANAGE 4.2–4.3. See our Regulatory Alignment Matrix for the complete mapping, including EU AI Act, SEC guidance, and ISO/IEC 42001 crosswalks.
What does the board dashboard look like?
The quarterly board report is a live dashboard—not a static slide deck. It shows four headline metrics (AI inventory count, tier distribution, open incidents, and compliance score) plus risk distribution charts and key actions for the quarter. Executives see what matters without wading through technical detail. See the Board Dashboard Preview section above for a sample with representative data. Your dashboard is populated from real governance platform data.

Ready to Govern AI with Confidence?

Start with a free assessment—we’ll identify your gaps and recommend the lightest path to board-defensible governance.

Disclaimer: This content is informational only and does not constitute legal, regulatory, or compliance advice. Implementation varies by client environment; controls are tailored by risk tier, regulatory context, and organizational maturity. Consult qualified legal and compliance counsel for your specific situation.
Looking for the self-serve starter pack? Download the free AI Governance Framework →
Ready to get started?
Book Governance Call · Procurement Materials · Free AI Assessment