NIST AI RMF Overview

The NIST AI Risk Management Framework — its four core functions (GOVERN, MAP, MEASURE, MANAGE), intended users, and relationship to the broader NIST framework family.

The NIST AI Risk Management Framework (AI RMF 1.0) was published in January 2023 and quickly became the de facto risk-management reference for US organizations. It is not a regulation but a voluntary framework; even so, its structure has shaped AI governance programs, vendor requirements, and federal contracting since publication.

The Four Core Functions

The AI RMF organizes risk management around four functions:

GOVERN: Establish the organizational culture, policies, and accountability structures for AI risk management. This function is foundational — the other three operate within its structure.

MAP: Identify and categorize AI risks in context. This includes understanding who is affected, what the system does, and which risks are most relevant to this deployment.

MEASURE: Analyze and assess AI risks using both quantitative and qualitative methods. This is where testing, evaluation, and metrics come in.

MANAGE: Prioritize and address AI risks based on their severity and likelihood. This includes response planning, incident management, and continuous improvement.
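For teams building internal governance tooling, the four functions can serve as a simple coverage checklist. The sketch below is illustrative only: the function names come from the AI RMF, but the `RiskActivity` structure, field names, and example entries are hypothetical, not part of the framework.

```python
from dataclasses import dataclass

# The four core functions defined by the AI RMF.
CORE_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

@dataclass
class RiskActivity:
    """A single risk-management activity, tagged with its core function.

    This structure is a hypothetical sketch for tracking coverage;
    it is not defined by the AI RMF itself.
    """
    function: str          # one of the four core functions
    description: str       # what the activity covers
    complete: bool = False # simple status flag

    def __post_init__(self):
        if self.function not in CORE_FUNCTIONS:
            raise ValueError(f"unknown core function: {self.function}")

def coverage(activities):
    """Return the set of core functions with at least one activity."""
    return {a.function for a in activities}

# Example program with no MANAGE activities yet.
activities = [
    RiskActivity("GOVERN", "Publish AI acceptable-use policy"),
    RiskActivity("MAP", "Inventory deployed AI systems and affected users"),
    RiskActivity("MEASURE", "Define evaluation metrics for model behavior"),
]

missing = set(CORE_FUNCTIONS) - coverage(activities)
print(sorted(missing))  # → ['MANAGE']
```

A check like this makes gaps visible early: here the program has governance, mapping, and measurement activities but nothing yet under MANAGE.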

Intended Users

The AI RMF is designed for AI developers, operators, and the organizations that deploy AI systems. It is intentionally sector-agnostic and use-case-agnostic, meant to apply to everything from recommendation systems to autonomous vehicles.

The NIST AI RMF Playbook provides additional implementation guidance, with suggested actions for each subcategory.

Continue Learning

This is a free preview module. Method 9 members access the full library of compliance frameworks, assessment tools, and implementation templates.

Explore Membership