
ISO/IEC 42001 vs. NIST AI RMF: A Cichocki Advisory Crosswalk for U.S. Enterprises

How U.S. enterprises should think about ISO/IEC 42001 alongside NIST AI RMF — what overlaps, what diverges, and where Cichocki Advisory recommends boards put their AI governance investment.

For most U.S. enterprises, the practical question isn't whether to align with the NIST AI Risk Management Framework (AI RMF) or ISO/IEC 42001 — it's how to do both without duplicating work, fragmenting controls, or running two parallel governance programs.

At Cichocki Advisory, we routinely help boards and risk committees crosswalk these two frameworks. The short version: NIST AI RMF gives you the operational language and risk taxonomy; ISO/IEC 42001 gives you the auditable management system. They're complementary, not competitive — but the integration is non-trivial.

What each framework actually does

NIST AI RMF is a voluntary risk-management framework organized around four functions: Govern, Map, Measure, Manage. Its strength is granularity at the risk and control level — the framework provides explicit profiles, characteristics of trustworthy AI, and crosswalks to other standards. It does not prescribe a management-system architecture.

ISO/IEC 42001:2023 is a certifiable AI management system standard. It uses the same Annex SL high-level structure as ISO 27001 and ISO 9001, which means it's optimized for organizations that already operate certified management systems and want a third-party-attestable AI governance layer. It prescribes how the management system should be governed, but is intentionally light on the AI-specific risk taxonomy.

Where they overlap

Both frameworks converge on the same fundamentals:

  - Senior-leadership accountability for AI risk, not delegation to a technical team alone.
  - A risk-based approach, with controls proportionate to the impact of each AI use case.
  - Lifecycle coverage, from design and data sourcing through deployment and decommissioning.
  - Documentation and transparency obligations that make decisions auditable after the fact.
  - Continuous monitoring and improvement rather than a one-time assessment.

Where they diverge

The divergence is where Cichocki Advisory engagements spend most of their time:

| Dimension | NIST AI RMF | ISO/IEC 42001 |
| --- | --- | --- |
| Form | Voluntary risk framework + profiles | Certifiable management system |
| Adoption signal | Internal practice; no third-party attestation | Audit-ready certification |
| Risk granularity | High — explicit categories & subcategories | Lower — refers to risk; doesn't enumerate |
| Management-system overhead | Low | High (documented procedures, internal audits, management review) |
| U.S. regulatory traction | Strong (referenced by federal guidance) | Growing (especially regulated industries) |
| EU traction | Recognized | Strong (frequently cited in EU AI Act conformity discussions) |

How Cichocki Advisory recommends boards sequence the two

For most U.S. enterprises starting their AI governance journey, our recommendation is sequenced rather than parallel:

  1. Start with NIST AI RMF. Use it to build the internal vocabulary, identify your highest-risk AI use cases, and design controls with operational granularity. This is the work that makes board reporting credible.
  2. Layer ISO/IEC 42001 when you need third-party attestation. Most organizations don't need certification on day one. They need it when customers, regulators, or insurers start asking. Building the management-system architecture early — with ISO 42001 in mind — costs less than retrofitting.
  3. Crosswalk continuously. Maintain a single control library that maps each control to both frameworks. This is the artifact that prevents drift and lets you respond to either an internal NIST AI RMF self-assessment or an external ISO 42001 audit without rebuilding evidence.
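In practice, the "single control library" in step 3 can start as a list of records, each carrying both framework references plus the evidence locations an auditor would ask for. A minimal sketch in Python — the control IDs, NIST subcategory references, and ISO clause references below are illustrative placeholders, not an authoritative mapping:

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """One entry in the shared control library.

    The NIST AI RMF subcategory and ISO/IEC 42001 clause references
    used here are illustrative, not an authoritative crosswalk.
    """
    control_id: str
    description: str
    nist_airmf: list    # NIST AI RMF subcategory IDs, e.g. "GOVERN 1.3"
    iso_42001: list     # ISO/IEC 42001 clause references, e.g. "Clause 9.3"
    evidence: list = field(default_factory=list)  # links to artifacts

# Hypothetical library entries for illustration.
library = [
    Control("AIG-001", "AI use-case inventory maintained and risk-tiered",
            nist_airmf=["MAP 1.1"], iso_42001=["Clause 6.1"]),
    Control("AIG-002", "Quarterly management review of AI risk posture",
            nist_airmf=["GOVERN 1.3"], iso_42001=["Clause 9.3"]),
]

def controls_for(reference: str) -> list:
    """Return control IDs mapped to a given framework reference,
    so either a NIST self-assessment or an ISO audit query hits
    the same evidence base."""
    return [c.control_id for c in library
            if reference in (c.nist_airmf + c.iso_42001)]
```

The point of the shared record is the query: `controls_for("Clause 9.3")` and `controls_for("GOVERN 1.3")` resolve to the same control, which is what prevents the duplicate-evidence problem the sequencing above is designed to avoid.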

Where the Cichocki Advisory governance framework fits

Our framework explicitly maps each control to both NIST AI RMF subcategories and ISO/IEC 42001 clauses. This crosswalk lives at the control level — not the framework level — because every real engagement we run reveals organization-specific gaps that one framework catches and the other misses.

Boards that adopt the Cichocki Advisory 90-day implementation roadmap reach a state where they can credibly answer either framework's questions in a board meeting — without needing the audit team to re-do the work.

What to do this week

If you're an executive or board member trying to figure out where to start:

  1. Inventory your AI use cases by risk tier — not just sensitivity, but autonomy and consequence.
  2. Map your existing controls to both NIST AI RMF subcategories and ISO/IEC 42001 clauses. You'll find more coverage than you expected, and very specific gaps.
  3. Decide on attestation timing. If you need ISO certification within 12 months, the management-system work has to start now. If you don't, NIST AI RMF self-assessment is enough for this fiscal year.
  4. Document the decision. Whatever you choose, make sure the board has it on record with rationale, success criteria, and review cadence.
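The risk-tiering in step 1 can be made concrete with a simple scoring rule. The sketch below is a hypothetical illustration, not a scheme prescribed by either framework: each use case gets 1–3 scores for autonomy, consequence, and sensitivity, and the tier is driven primarily by autonomy and consequence, as the step suggests.

```python
def risk_tier(autonomy: int, consequence: int, sensitivity: int) -> int:
    """Assign a coarse tier (1 = highest scrutiny) from three 1-3 scores.

    Illustrative weighting only: autonomy and consequence dominate,
    with sensitivity able to escalate a borderline case.
    """
    driver = max(autonomy, consequence)
    if driver == 3 or (driver == 2 and sensitivity == 3):
        return 1  # board-level oversight
    if driver == 2 or sensitivity >= 2:
        return 2  # risk-committee review
    return 3      # standard controls

# Hypothetical inventory entries for illustration.
inventory = [
    {"use_case": "customer-facing chat assistant",
     "autonomy": 2, "consequence": 2, "sensitivity": 3},
    {"use_case": "internal code search",
     "autonomy": 1, "consequence": 1, "sensitivity": 1},
]
for item in inventory:
    item["tier"] = risk_tier(item["autonomy"],
                             item["consequence"],
                             item["sensitivity"])
```

Whatever scoring rule you adopt, the design choice that matters is the one named in step 1: autonomy and consequence, not data sensitivity alone, should be able to push a use case into the top tier.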

If you'd like to discuss how this maps to your specific environment, book a discovery call.

Work with Cichocki Advisory

Cichocki Advisory provides board-ready AI governance, AI strategy, and platform architecture for executives navigating enterprise AI transformation. All engagements are conducted under NDA with scoped, time-limited credentials.

Book Advisory Call →