Cichocki Advisory

Why Cichocki Advisory Built Its AI Governance Framework

The origin story of the Cichocki Advisory AI Governance Framework — what we kept seeing in boardrooms, why existing frameworks weren't enough, and what we designed instead.

The Cichocki Advisory AI Governance Framework didn't start as a framework. It started as a pattern we kept seeing in boardrooms — a specific kind of governance failure that the published frameworks (NIST AI RMF, ISO/IEC 42001, sector-specific guidance) acknowledged but didn't directly fix.

This is the story of why we built it, and what makes it different from a published standard.

The pattern we kept seeing

Boards were approving AI governance programs that read well on paper and produced nothing operationally. The slide decks referenced the right frameworks. The policies were thoughtful. The risk taxonomies were complete. And six months later, when a Tier 1 AI system needed approval — a customer-facing model, a regulatory-impact use case — the governance machinery wasn't there. The risk team escalated to the executive sponsor; the executive sponsor escalated to the board; the board defaulted to "no, slow down."

This wasn't a vocabulary problem. The published frameworks were correct. It was an operational gap: the work between "we adopted NIST AI RMF" and "we have a working AI governance machine that produces evidence the board can examine" was undefined. Each organization was reinventing the operational layer.

What existing frameworks don't give you

We're explicit fans of the published frameworks. We use NIST AI RMF as the risk taxonomy in nearly every engagement. We crosswalk to ISO/IEC 42001 when certification is on the horizon. The crosswalk between the two is one of our most-referenced internal artifacts.

But the published frameworks deliberately stop short of:

- a risk-tiering rubric concrete enough that two reviewers classify the same system the same way
- a decision-rights matrix naming which roles approve which AI-specific decisions
- lifecycle gates with documented exit criteria and evidence requirements
- a board reporting format with a stable cadence

These gaps are features of the frameworks, not bugs — they keep the standards portable across industries and jurisdictions. But they're exactly where most enterprise governance programs stall.

What we built instead

The Cichocki Advisory framework is opinionated where the published frameworks are deliberately silent. Specifically:

1. A risk-tiering rubric

Four tiers based on three dimensions: autonomy (does the system act, or does it advise?), consequence (what's the worst-case outcome of a wrong action?), and data sensitivity. The tiering rubric is concrete enough that two people in the same organization independently classify the same system the same way 80%+ of the time. Without that, governance investment fragments.
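The three-dimensions-to-four-tiers mapping can be sketched as a small scoring function. This is an illustrative assumption only: the article describes the rubric's dimensions but not its actual thresholds, so the enum values and score cutoffs below are hypothetical, not the framework's.

```python
from enum import IntEnum

class Autonomy(IntEnum):
    ADVISORY = 1    # system recommends; a human acts
    AUTONOMOUS = 2  # system acts directly

class Consequence(IntEnum):
    LOW = 1       # reversible, internal impact
    MODERATE = 2  # customer-visible, recoverable
    SEVERE = 3    # regulatory, financial, or safety impact

class DataSensitivity(IntEnum):
    PUBLIC = 1
    INTERNAL = 2
    REGULATED = 3  # PII, financial, or health data

def risk_tier(autonomy: Autonomy,
              consequence: Consequence,
              sensitivity: DataSensitivity) -> int:
    """Map the three dimensions to a tier, 1 (highest risk) .. 4 (lowest).

    Thresholds are illustrative; the point is that the mapping is a fixed
    rule, so two independent reviewers reach the same tier.
    """
    score = autonomy + consequence + sensitivity  # ranges 3..8
    if score >= 7:
        return 1
    if score >= 6:
        return 2
    if score >= 5:
        return 3
    return 4
```

A deterministic rule like this is what makes the 80%+ agreement target measurable: disagreements trace back to how the inputs were assessed, not to how the tier was computed.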

2. A decision-rights matrix template

By role — not by individual — for AI-specific decision classes. The matrix specifies who approves new AI use cases, who approves model updates, who approves data sources, who escalates incidents. Most published frameworks acknowledge this matrix is needed. We provide the template that makes the matrix actually exist in 30 days, not 6 months.

3. A lifecycle gating model with named exit criteria

Six gates: ideation, design review, pre-deployment validation, deployment approval, post-deployment monitoring, decommissioning. Each gate has a documented exit-criteria evidence package — what specifically must exist to pass the gate, who reviews it, where it's stored.
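The six gates and their evidence packages can be sketched as a simple data structure plus a pass check. The gate names are the article's; the specific exit criteria, reviewer roles, and storage paths below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Gate:
    name: str
    exit_criteria: list[str]  # what specifically must exist to pass
    reviewer_role: str        # who reviews the evidence package
    evidence_store: str       # where the package is stored

# The six gates from the lifecycle model; contents are hypothetical.
GATES = [
    Gate("ideation", ["use-case one-pager", "initial risk tier"],
         "Product Owner", "governance-repo/ideation"),
    Gate("design review", ["architecture doc", "data-flow diagram"],
         "Architecture Board", "governance-repo/design"),
    Gate("pre-deployment validation", ["eval results", "bias review"],
         "Model Risk Owner", "governance-repo/validation"),
    Gate("deployment approval", ["sign-off record", "rollback plan"],
         "Executive Sponsor", "governance-repo/deploy"),
    Gate("post-deployment monitoring", ["drift dashboard", "incident runbook"],
         "Operations Lead", "governance-repo/monitoring"),
    Gate("decommissioning", ["data-retention decision", "shutdown record"],
         "Data Governance Lead", "governance-repo/decommission"),
]

def can_pass(gate: Gate, evidence: set[str]) -> bool:
    """A gate passes only when every named exit criterion is present."""
    return all(item in evidence for item in gate.exit_criteria)
```

The design choice worth noting: exit criteria are enumerated artifacts, not judgment calls, so "did we pass the gate?" is answerable by inspecting the evidence store.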

4. A board reporting template

A four-quadrant maturity view with a quarterly delta and a forward look. Boards we work with adopt this format because it answers the questions boards actually ask, and because it gives the executive sponsor a stable rhythm.

5. A continuous-improvement loop

The mechanism by which operational governance feedback flows back into policy and process. This is what prevents the failure mode where policies stay static while AI deployment evolves around them.

What makes this not just another framework

We didn't set out to publish a competing standard. The framework exists because every Cichocki Advisory engagement was building these artifacts from scratch, and the artifacts were 80% the same across organizations. The framework is the part that's reusable — risk-tiering rubric, decision-rights template, lifecycle gates, board reporting format. The other 20% is what an engagement is for: the parts that have to be customized to the specific organization, regulatory environment, and operating model.

When boards ask us "is this just NIST AI RMF rebranded?", the honest answer is: no, but it's also not a replacement. The Cichocki Advisory framework sits on top of NIST AI RMF (and ISO/IEC 42001 where relevant) and fills in the operational layer those frameworks intentionally leave open. Our framework is a methodology, not a standard.

Where this connects to ThreadSync

The framework's most opinionated artifacts — lifecycle gates, evidence packages, telemetry — were the seed of ThreadSync, the governed-AI platform Cichocki Advisory founded. The story of how the advisory work led to the platform is told separately. Short version: when you build evidence packages and lifecycle gates manually for enough enterprise customers, you eventually realize the controls themselves should be operational software, not Confluence pages.

Where to read it

The framework lives at cichocki.com/governance/. The free downloadable version covers the rubric, decision-rights template, and lifecycle model. Engagement-grade work customizes everything to the specific organization — regulatory environment, operating model, risk appetite — and provides the implementation muscle that the published version intentionally leaves to the reader.

If your board is asking "do we have an AI governance program that we can defend?" and the answer isn't crisp, book a discovery call.

Work with Cichocki Advisory

Cichocki Advisory provides board-ready AI governance, AI strategy, and platform architecture for executives navigating enterprise AI transformation. All engagements operate under NDA, using scoped, time-limited credentials.

Book Advisory Call →