NIST AI Risk Management Framework (RMF)


What is the NIST AI Risk Management Framework?

The NIST AI Risk Management Framework (AI RMF) helps organizations identify, assess, and plan for the risks posed by AI, including generative AI. Developed by the US National Institute of Standards and Technology (NIST), the framework promotes trustworthy and responsible AI development and use.

 

NIST AI Risk Management Framework Summary

By focusing on risk identification, assessment, and mitigation, organizations can use the NIST AI Risk Management Framework to develop and deploy AI responsibly. Key components include:

  • Risk Identification and Assessment: The framework helps organizations identify potential risks throughout the AI system’s lifecycle, including data, algorithms, and deployment.
  • Risk Management Strategies: It outlines strategies for mitigating risks, such as data management, incident response, and insurance.
  • Governance and Accountability: The AI RMF emphasizes the importance of establishing clear roles and responsibilities for AI governance.
  • Trustworthy AI: The framework promotes the development of AI systems that are valid, reliable, safe, secure, accountable, transparent, explainable, privacy-enhanced, and fair. 

The framework organizes these activities into four core functions: Govern, Map, Measure, and Manage.
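To make the four functions concrete, here is a minimal Python sketch that models them as a simple data structure, with a couple of example activities for each. The activity strings are paraphrased for illustration only; they are not an official NIST checklist, and the names in the code are ours, not part of the framework.

```python
from enum import Enum

class RMFFunction(Enum):
    """The four core functions of the NIST AI RMF."""
    GOVERN = "Govern"    # cross-cutting: culture, policies, accountability
    MAP = "Map"          # establish context and identify AI risks
    MEASURE = "Measure"  # analyze, assess, and track identified risks
    MANAGE = "Manage"    # prioritize and act on risks

# Illustrative activities per function (paraphrased, not an official list).
EXAMPLE_ACTIVITIES = {
    RMFFunction.GOVERN: ["Define AI governance roles", "Set risk tolerance"],
    RMFFunction.MAP: ["Document intended use", "Identify affected stakeholders"],
    RMFFunction.MEASURE: ["Track accuracy metrics", "Test for harmful bias"],
    RMFFunction.MANAGE: ["Prioritize identified risks", "Plan incident response"],
}

if __name__ == "__main__":
    for fn in RMFFunction:
        print(f"{fn.value}: {', '.join(EXAMPLE_ACTIVITIES[fn])}")
```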


 

The AI RMF provides a general foundation for managing AI risks. NIST has also published a companion Generative AI Profile, which builds on the framework to address the unique challenges posed by generative AI and large language models.

 

Artificial Intelligence Governance Roles & Best Practices

Effective Artificial Intelligence (AI) governance involves teams across the organization, spanning a range of disciplines and responsibilities:

  • Executive Leadership: Provides funding, sets goals, and establishes governance standards
  • Architects: Map AI initiatives to business strategy, analyze how AI systems interact with existing infrastructure and business processes, and monitor the performance and risks of AI systems
  • AI Governance & Data Governance teams: Develop policies and standards for AI development and deployment, oversee risk management, ensure the data used for AI is accurate and reliable, and safeguard data privacy
  • Security Managers: Manage AI security risks and ensure compliance with security standards

By working together, these teams can create a robust AI governance framework.


What Makes an AI System Trustworthy?

The NIST AI Risk Management Framework (AI RMF) places a strong emphasis on AI trustworthiness. It defines seven core characteristics of a trustworthy AI system:

  • Valid and Reliable: The AI system produces accurate and consistent results
  • Safe: The AI system does not endanger human life, health, property, or the environment
  • Secure and Resilient: The AI system is protected from cyberattacks and can recover from disruptions
  • Accountable and Transparent: Clear lines of responsibility are established, and the AI system’s decision-making processes are understandable
  • Explainable and Interpretable: The AI system’s outputs can be understood and interpreted by humans
  • Privacy-Enhanced: The AI system protects user data and privacy
  • Fair with Harmful Bias Managed: The AI system avoids unfair bias and discrimination
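As a rough illustration of how a team might track these characteristics, the sketch below records scores for one system and flags the characteristics that need attention. The 0–5 scale, the threshold, and the system name are invented for the example; the NIST AI RMF does not prescribe a scoring scheme.

```python
from dataclasses import dataclass, field

# The seven trustworthiness characteristics named in the NIST AI RMF.
CHARACTERISTICS = [
    "Valid and Reliable",
    "Safe",
    "Secure and Resilient",
    "Accountable and Transparent",
    "Explainable and Interpretable",
    "Privacy-Enhanced",
    "Fair with Harmful Bias Managed",
]

@dataclass
class TrustworthinessAssessment:
    """Scores one AI system on a 0-5 scale (the scale is illustrative)."""
    system_name: str
    scores: dict = field(default_factory=dict)

    def rate(self, characteristic: str, score: int) -> None:
        if characteristic not in CHARACTERISTICS:
            raise ValueError(f"Unknown characteristic: {characteristic}")
        self.scores[characteristic] = score

    def gaps(self, threshold: int = 3) -> list:
        """Characteristics scoring below the threshold, or not yet rated."""
        return [c for c in CHARACTERISTICS if self.scores.get(c, 0) < threshold]

assessment = TrustworthinessAssessment("resume-screening-model")
assessment.rate("Valid and Reliable", 4)
assessment.rate("Fair with Harmful Bias Managed", 2)
print(assessment.gaps())  # anything unrated or below the threshold needs work
```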

 

Which Risk Management Techniques Can Reduce AI Risk?

The NIST AI RMF outlines a range of risk management techniques that organizations can put in place to reduce AI risk. These include:

  • Data management strategies
  • Decommissioning procedures such as “kill switches” (a minimal sketch follows this list)
  • Incident response plans
  • Insurances, warranties, and other risk transfer mechanisms
  • ML and endpoint security countermeasures
  • Model artifact editing and modifications
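To show what one of these techniques might look like in practice, here is a minimal sketch of the “kill switch” idea: a shared flag that is checked before every inference call, so operators can halt a misbehaving model immediately without redeploying. All of the names and the fallback behavior are hypothetical, not part of the NIST framework.

```python
import threading

class KillSwitch:
    """A thread-safe flag that operators can flip to halt model inference."""
    def __init__(self) -> None:
        self._disabled = threading.Event()

    def trip(self, reason: str) -> None:
        print(f"Kill switch tripped: {reason}")
        self._disabled.set()

    @property
    def active(self) -> bool:
        return self._disabled.is_set()

kill_switch = KillSwitch()

def predict(prompt: str) -> str:
    # Check the switch before every call, so the model can be halted at once.
    if kill_switch.active:
        raise RuntimeError("Model disabled by kill switch; serve a fallback.")
    return f"model output for: {prompt}"  # placeholder for a real model call

print(predict("screen this resume"))
kill_switch.trip("bias incident under investigation")
# Any further predict(...) call now raises until the switch is reset.
```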

 

How Can Enterprise Architects Manage AI Risk?

By following the NIST AI Risk Management Framework and other architecture best practices, teams can ensure AI systems are developed and deployed responsibly, driving business value while mitigating potential risks.

The framework is now available as part of every ABACUS installation and can be used as-is or customized to fit each organization.

Business capabilities and systems can be assessed and scored against the AI RMF functions. ABACUS can be used to put together profiles, both to assess organizational compliance as a whole and for specific systems. Teams can then view the upstream and downstream impacts of those AI risks on people, processes, and technologies. ABACUS users can set up scoring and compliance metrics to get an objective view into how AI risks affect enterprise systems.
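As a rough sketch of what such scoring might look like, the snippet below averages per-function scores into a single compliance figure for each system. The scale, the example systems, and the scoring rule are all invented for illustration and do not reflect ABACUS’s actual data model or metrics.

```python
# Hedged sketch: score systems against the four RMF functions and roll
# the results up into one compliance number per system.
FUNCTIONS = ["Govern", "Map", "Measure", "Manage"]

def compliance_score(scores: dict) -> float:
    """Average 0-100 score across the four functions (unscored counts as 0)."""
    return sum(scores.get(fn, 0) for fn in FUNCTIONS) / len(FUNCTIONS)

# Invented example portfolio, for illustration only.
portfolio = {
    "hiring-tool": {"Govern": 80, "Map": 60, "Measure": 40, "Manage": 50},
    "chat-assistant": {"Govern": 70, "Map": 75},  # not yet fully assessed
}

for system, scores in portfolio.items():
    print(f"{system}: {compliance_score(scores):.0f}/100")
```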

 

NIST AI Risk Management Framework Example

The ABACUS dashboard below demonstrates how the NIST AI RMF can be applied to a specific AI application within an organization, in this case, a hiring tool.

We can assess and prioritize risks associated with the technology and create mitigation plans based on suggested actions from the NIST framework.

We can also see the upstream and downstream dependencies of those applications, and trace how risks might spread. This allows architects to understand and report on which teams are responsible for these technologies and mitigation plans, the processes that these technologies are used in, and the overall business impact of these risks.
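This kind of dependency tracing amounts to a reachability query over a graph of applications and processes. The sketch below shows the idea with a breadth-first search over a small, invented dependency map; tools such as ABACUS maintain this graph in their repositories, but the data and function names here are purely illustrative.

```python
from collections import deque

# Hypothetical downstream-dependency graph: each application points to the
# processes and systems that consume its outputs.
DOWNSTREAM = {
    "hiring-tool": ["candidate-ranking", "hr-dashboard"],
    "candidate-ranking": ["offer-workflow"],
    "hr-dashboard": [],
    "offer-workflow": [],
}

def impacted(start: str) -> set:
    """Everything reachable downstream of a risky application (BFS)."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for dep in DOWNSTREAM.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# A bias risk in the hiring tool touches everything downstream of it:
print(impacted("hiring-tool"))  # candidate-ranking, hr-dashboard, offer-workflow
```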

NIST AI RMF in the ABACUS tool
Creating a profile for a hiring tool, and the AI systems involved, using the NIST AI RMF in the ABACUS toolset

 

Ready to upgrade your risk management tools?

Schedule a Demo