HITRUST is actively paving the way for trustworthy AI assurances. With the Q4 2023 release of version 11.2.0, the HITRUST CSF became the first and only assurance system to offer control assurances for generative AI and other related applications. HITRUST will offer AI assurance reports starting in 2024, enabling organizations to demonstrate their AI cyber maturity.
HITRUST Shared Responsibility and Inheritance Program
Since 2007, HITRUST has been leading the development of a trusted and reliable assurance system, and its varied offerings help organizations streamline their assurance journeys. The HITRUST Shared Responsibility and Inheritance Program provides significant efficiencies: it clearly defines the shared responsibilities between customers and their service providers, enabling organizations to plan and assign security accountabilities.
The Inheritance Program allows organizations to reuse previously certified controls. These could be their own controls from past assessments or controls from their third parties, such as cloud service providers (CSPs). As most major CSPs already have HITRUST certifications, organizations working with them can streamline their assessment processes by inheriting controls. Customer organizations can eliminate redundancy and save time, effort, and money in their certification journeys. They can inherit up to 85% of the requirements in a HITRUST e1 Validated Assessment and up to 70% in a HITRUST r2 Validated Assessment.
Shared responsibility and inheritance with AI
Building on the same principles, HITRUST will offer shared responsibility and inheritance capabilities for AI risk management. The AI capabilities in HITRUST CSF v11.2.0 help organizations ensure that the necessary security controls are in place. As AI risk management becomes part of the HITRUST CSF, the shared responsibility model applies here, too.
AI service providers, such as large language model providers, and AI users must understand their shared responsibilities. From the outset, AI service providers need to clarify what their models are, and are not, responsible for. Customer organizations must evaluate whether a model suits their business and follows essential cybersecurity practices. Both parties must then work together to identify potential AI risks and develop corresponding mitigation plans.
AI users and service providers can mutually agree on the allocation of shared responsibilities. It is essential for them to know which party owns the responsibility for model training, tuning, and testing and for what contexts. This includes identifying risks, planning control dimensions, and understanding the implementation of AI systems.
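One way to picture such an allocation is as a simple responsibility matrix recording which party owns each activity and in what context. The sketch below is purely illustrative: the activity names, parties, and contexts are hypothetical examples, not a HITRUST-defined schema.

```python
from dataclasses import dataclass
from enum import Enum

class Owner(Enum):
    PROVIDER = "AI service provider"
    CUSTOMER = "customer organization"
    SHARED = "shared"

@dataclass
class Responsibility:
    activity: str
    owner: Owner
    context: str  # the scope in which this allocation applies

# Illustrative allocation; real allocations are agreed per engagement.
MATRIX = [
    Responsibility("base model training", Owner.PROVIDER, "foundation model"),
    Responsibility("fine-tuning on domain data", Owner.CUSTOMER, "customer use case"),
    Responsibility("model testing and evaluation", Owner.SHARED, "pre-deployment"),
]

def owner_of(activity: str) -> Owner:
    """Look up which party owns a given activity."""
    for r in MATRIX:
        if r.activity == activity:
            return r.owner
    raise KeyError(activity)
```

Capturing the allocation explicitly, rather than leaving it implicit in contracts, makes gaps and overlaps in ownership easy to spot before an assessment.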
Organizations must identify the training data and check it for quality and relevance. They need to verify that proper controls are in place to protect that data, recognize biases in it, and work to minimize them. Lastly, they need to assess, and periodically reassess, any newly created or modified data.
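A first, minimal step toward recognizing bias is measuring how labels are distributed in the training data and flagging under-represented classes. The sketch below assumes a simple labeled dataset and an arbitrary 10% threshold; both are hypothetical choices for illustration, not a HITRUST requirement.

```python
from collections import Counter

def class_balance(labels):
    """Return each label's share of the dataset."""
    counts = Counter(labels)
    total = len(labels)
    return {label: n / total for label, n in counts.items()}

def flag_underrepresented(labels, threshold=0.10):
    """Flag labels whose share falls below the threshold,
    a crude proxy for under-representation in training data."""
    shares = class_balance(labels)
    return sorted(label for label, share in shares.items() if share < threshold)
```

Checks like this are only a starting point; real bias assessments also consider how the data was collected and whether sensitive attributes correlate with outcomes.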
With the HITRUST Shared Responsibility and Inheritance Program, organizations working with AI service providers can achieve their security certifications more quickly and efficiently. They can be assured that their AI systems are robust and protected with appropriate security safeguards. HITRUST, along with leading AI service providers, is making every effort to help organizations manage AI risks efficiently.
Learn more about the HITRUST AI Assurance Program to understand the different approaches HITRUST is taking to drive trust in AI systems.