The HITRUST AI Assurance Program provides a secure and sustainable strategy for trustworthy AI, leveraging the HITRUST CSF, AI-specific assurances, and shared responsibilities and inheritance.

HITRUST, the information risk management, standards, and certification body, today published a comprehensive AI strategy for the secure and sustainable use of AI. The strategy encompasses a series of elements critical to delivering trustworthy AI. The resulting HITRUST AI Assurance Program prioritizes risk management as a foundational consideration in the newly updated version 11.2 of the HITRUST CSF. HITRUST also announced that AI risk management guidance for AI systems will follow soon, along with the use of inheritance in support of shared responsibility for AI and an approach for industry collaboration as part of the AI Assurance Program.

AI, and more specifically Generative AI, made popular by OpenAI's ChatGPT, is unleashing a technological wave of innovation with transformative economic and societal potential. Goldman Sachs Research predicts that Generative AI could raise global GDP by 7% over the next 10 years. Organizations are eager to transform their operations and boost productivity across business functions ranging from customer relationship management (CRM) to software development, unlocking new layers of value through a growing set of enterprise AI use cases. However, any new disruptive technology inherently introduces new risks, and Generative AI is no different.

AI foundation models now available from cloud service providers and other leading vendors allow organizations to scale AI across industry use cases and specific enterprise needs. But the opaque nature of these deep neural networks introduces data privacy and security challenges that must be met with transparency and accountability. It is critical for organizations offering AI solutions to understand their responsibilities and to obtain reliable assurances from their service and solution providers.

The HITRUST AI Assurance Program builds upon a common, reliable, and proven approach to security assurance. It allows organizations implementing AI models and services to understand the associated risks and to reliably demonstrate their adherence to AI risk management principles with the same transparency, consistency, accuracy, and quality available through all HITRUST Assurance reports.

"Risk management, security and assurance for AI systems requires that organizations contributing to the system understand the risks across the system and agree how they together secure the system," said Robert Booker, Chief Strategy Officer, HITRUST. "Trustworthy AI requires understanding of how controls are implemented by all parties and shared and a practical, scalable, recognized, and proven approach for an AI system to inherit the right controls from their service providers. We are building AI Assurances on a proven system that will provide the needed scalability and inspire confidence from all relying parties, including regulators, that care about a trustworthy foundation for AI implementations."

Today, organizations can deploy Generative AI large language models (LLMs) through a variety of methods, including self-hosting LLMs on-premises or accessing an LLM through a service provider. Each method comes with differences in how LLMs can be built, trained, and tuned, as well as different shared responsibilities for managing security and data privacy risks.

Cloud service providers are building AI on their cloud foundations, and already assist thousands of organizations in achieving HITRUST certification more quickly through the hundreds of Shared Responsibility and Inheritance control requests they receive daily. This provides their customers the benefit of importing and inheriting the strong controls and assurances provided by their existing HITRUST certifications. Adding AI to the HITRUST CSF extends this proven approach to help organizations also provide assurances around their use of and reliance on AI.

Microsoft Azure OpenAI Service supports HITRUST's maintenance of the CSF and enables accelerated mapping of the CSF to new regulations, data protection laws, and standards. This in turn supports the Microsoft Global Healthcare Compliance Scale Program, enabling solution providers to streamline compliance for accelerated solution adoption and time-to-value.

"At Microsoft, we are committed to a practice of responsible AI by design, guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. We are putting these principles into practice across the company to develop and deploy AI that will have a positive impact on society," said John Doyle, Global Chief Technology Officer, Healthcare and Life Sciences, Microsoft.

"In our own use of Azure OpenAI Service, we have seen a significant acceleration in mapping of the HITRUST CSF to new authoritative sources," said Robert Booker. "The business impact of the use of generative AI is clear, as is the necessity of an AI assurance program to appropriately manage risk."

AI systems are made up of the system that is using or consuming AI technologies, the organizations providing the AI service, and, in many cases, additional data providers supporting the machine learning system and large language model underpinning it. Understanding the context of the overall system on which AI is delivered and consumed is critical, as is partnering with high-quality AI service providers that offer clear, objective, and understandable documentation of their AI risks and how those risks, including security, are managed in their services. When a provider is committed to an approach that supports inheritance and shared responsibility, users of its AI services can leverage its capabilities as part of the overarching risk management and security program accompanying their AI deployment, increasing the efficiency and trustworthiness of their systems.

"AI has tremendous social potential and the cyber risks that security leaders manage every day extend to AI. Objective security assurance approaches such as the HITRUST CSF and HITRUST certification reports assess the needed security foundation that should underpin AI implementations," says Omar Khawaja, Field CISO of Databricks. "Databricks is excited to be working with HITRUST to build on this important foundation and to significantly reduce the complexity of risk management and security for AI implementations across all industries."

The HITRUST AI Assurance Program enables AI users to engage proactively and efficiently with risk management considerations around AI and to begin discussing shared risk management with their AI service providers. The resulting clarity of shared risks and accountabilities will allow organizations to place reliance on shared information protection controls already available from internal shared IT services and external third-party organizations, including providers of AI technology platforms and suppliers of AI-enabled applications and other managed AI services. More specifically, both AI users and AI service providers may add AI risk management dimensions to their existing HITRUST e1, i1, and r2 assurance reports and use the resulting reports to demonstrate the presence of AI risk management on top of robust and provable cybersecurity capabilities. This supports demonstrable cybersecurity today, while HITRUST and industry leaders regularly add further control considerations to the AI Assurance Program.

To find out more about the HITRUST AI Assurance Program and join the discussion, view our AI Hub.
