
Robyn Barton, HITRUST Practice Leader, and Jesse Goodale, Senior Manager, LBMC

How NIST AI RMF 1.0 provides guidance on controls for autonomous computing, and what that could mean for your organization’s control set.

The National Institute of Standards and Technology (NIST) released its first Risk Management Framework (RMF) around Artificial Intelligence (AI) this past January. This was in response to a global need to address the unique risks of information systems that are rapidly becoming autonomous and generative. These information systems have several unique requirements and capabilities that often alter the required risk focus of a traditional information system. As a result, operators may need to build, augment, enhance, or reprioritize their current control set to accommodate these new capabilities.

Redefining risk

As part of the new RMF, NIST has differentiated AI-specific risk from its previously established RMF. The goal is to identify and quantify the unique risks inherent to the characteristics, operations, and requirements of AI technology. Each step of an AI workflow has the potential to alter the risk environment of an organization, including some of the following common areas.

Data inputs

Data ingested by AI technology may lack context, accuracy, completeness, sufficient training iterations, or relevance, and the resulting outputs may reflect that skew.
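As an illustration of mitigating data-input risk, a pre-ingestion quality check can flag incomplete records before they reach a model. This is a minimal sketch; the record layout, field names, and completeness threshold are all hypothetical, not drawn from NIST guidance.

```python
def check_input_quality(records, required_fields):
    """Flag common data-input risks: missing fields and empty values.

    `records` is a list of dicts representing rows of training or
    inference data; `required_fields` are the fields the model expects.
    Returns a completeness score and the rows that failed the check.
    """
    issues = []
    for i, row in enumerate(records):
        missing = [f for f in required_fields if f not in row or row[f] in (None, "")]
        if missing:
            issues.append((i, missing))
    completeness = 1 - len(issues) / len(records) if records else 0.0
    return completeness, issues

# Hypothetical sample rows with two quality problems.
rows = [
    {"age": 42, "diagnosis": "J45"},
    {"age": None, "diagnosis": "E11"},
    {"age": 37},
]
score, problems = check_input_quality(rows, ["age", "diagnosis"])
```

A control built on a check like this would quarantine or remediate flagged rows rather than silently feeding them to the model.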


Model changes

Changes to the underlying model may reduce the effectiveness of the solution.

Statistical Reliability

Due to the opacity of the learning process, emergent behavior, reproducibility problems, and unidentified bias may arise.

Data Outputs

PHI/PII may manifest through correlations within data outputs.
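One way to mitigate this output-side risk is a post-processing scan of model outputs before release. The sketch below is illustrative only; the two regex patterns are placeholders, and a production control would rely on a vetted PII/PHI detection library rather than hand-written expressions.

```python
import re

# Illustrative patterns only; real deployments need far broader coverage.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_output(text):
    """Return the PII categories detected in a model's output text."""
    return sorted(name for name, pat in PII_PATTERNS.items() if pat.search(text))

hits = scan_output("Patient reachable at jane.doe@example.com, SSN 123-45-6789.")
```

Outputs that trigger the scanner can then be redacted, blocked, or routed for human review, depending on the organization's control design.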

Depending on the intended use of the AI technology, there may be a need to refocus controls to mitigate some or all of these emerging risks.

Updating controls

NIST has responded to these emerging risks by applying an AI lens to its standard “Govern, Map, Measure, Manage” approach.


Govern

Two key areas of focus are:

1. Ensure current internal mechanisms, such as operations, third-party management, and risk management functions, have AI-specific controls integrated into their established processes.
2. Scan for external factors, such as regulatory updates, that may require additional control updates.


Map

The key is to identify the purpose, scope, capabilities, data flows, and requirements of the AI technology and to map those to the associated risks.


Measure

Here, the focus is on quantifying the unique risks of AI technology in a manner that provides meaningful insight into how those risks are being managed on a continuous basis.


Manage

This function focuses on applying expected risk-hygiene controls to the AI technology, including risk management and evaluation, resources and sustainability, recovery from new or unidentified risks, and system and data life-cycle controls.

Once risks and controls are reevaluated, the framework provides guidance around establishing “use case profiles,” which package a particular AI workflow into a tailored RMF for each AI solution.
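A use case profile can be thought of as structured documentation tying one AI workflow to its risks and controls. The dictionary below sketches that idea; every field name and value is hypothetical, since NIST AI RMF 1.0 does not prescribe a profile schema.

```python
# A hypothetical "use case profile" sketched as a plain dictionary.
use_case_profile = {
    "solution": "clinical-notes-summarizer",
    "purpose": "Summarize clinician notes for care coordination",
    "data_flows": ["EHR export", "model inference", "summary back to EHR"],
    "covered_information": ["PHI"],
    "risks": {
        "data_inputs": "incomplete or unrepresentative notes",
        "data_outputs": "PHI surfacing through correlations",
    },
    "controls": ["input validation", "output PII scanning", "access logging"],
}

def profile_is_complete(profile, required_keys=("purpose", "data_flows", "risks", "controls")):
    """Basic completeness check before a profile is accepted into the RMF."""
    return all(k in profile and profile[k] for k in required_keys)
```

Keeping profiles machine-readable like this lets an organization check them for completeness and reconcile them against its control inventory as workflows change.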

Potential impact on HITRUST

As AI technology can fundamentally change the people, processes, and technology within an organization, there may be updates to an organization’s risks and controls, impacting the scope of an organization’s HITRUST assessment. This could include the following.

  • Due to the iterative nature of AI, a significant increase in transaction count, data quantity, connectors, third-party data, or other scoping factors may lead to an increase in baselines for an organization that wishes to complete an r2 assessment.
  • On account of a unique development process and resource requirements, an organization may choose to implement separate controls around processes such as configuration management and the systems development lifecycle.
  • As data can be significantly modified, correlated, aggregated, and transformed, there may be a need to remap an organization’s covered information footprint, bringing in additional people, processes, and technology. This includes the establishment of additional covered information through data correlations.
  • As NIST is one of HITRUST’s authoritative sources, there is the potential for a more direct impact within the HITRUST framework as this guidance becomes more widely adopted. For more information, check out the HITRUST AI Assurance Program and download the strategy document.

How LBMC can help

As AI and its uses continue to evolve, we expect formal guidance and industry good practices to evolve with them. Adapting your organization to this ever-evolving risk landscape may require reevaluating your HITRUST scope and approach. Whether you are starting your HITRUST journey or have been on this ride for years, LBMC is here to help you navigate the challenging landscape of AI. As the leader of the “10-year club” of HITRUST assessors, LBMC stands as one of the longest-serving assessors in the business, with the most experienced team in the industry. LBMC has helped countless organizations reach their HITRUST CSF certification goals, has learned many lessons along the way, and is glad to offer encouragement and advice to those embarking on this journey. Please reach out any time for assistance!
