Industry leaders unite and pioneer the path to AI Assessment and Certification

FRISCO, Texas, March 7, 2024 /PRNewswire/

HITRUST, the leader in cybersecurity assurance, today announced the formation of the HITRUST AI Assurance Working Group. This pioneering initiative aims to establish a model for security control assurances for AI systems, supporting HITRUST's groundbreaking efforts to offer a path to AI Assessment and Certification. The Working Group has united industry experts and leaders from AI providers and early adopters, focusing on the shared goal of ensuring that both users and providers of AI systems manage the security risks associated with their AI models and services in a transparent, consistent manner that stakeholders can trust.

The assurance of AI security controls is essential to building trust in the use of AI technologies in business, and it must be scalable to ensure that these controls are properly implemented and effective. The HITRUST AI Assurance Working Group is dedicated to helping HITRUST create a practical approach to security controls and assurances, one suitable both for systems developed internally and for those built on common large language models and service environments. A key aspect of this initiative is enabling consumers of AI models and relying parties to understand, demonstrate, and validate the security measures implemented across the service environment, in the context of business risk. This effort will utilize the HITRUST shared responsibility and control inheritance model, which is widely adopted by leading cloud service providers and key players in AI and machine learning systems and is increasingly leveraged by assessed entities to streamline the assessment process.
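
To make the inheritance concept concrete, the sketch below is a minimal, hypothetical illustration in Python; the names, fields, and "AI-01"-style identifiers are invented for this example and are not part of any HITRUST specification or API. It shows the lookup pattern that streamlines an assessment: a customer checks whether a validated control implementation can be inherited from its service provider and assesses the control itself only when it cannot.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ControlAssessment:
        # Hypothetical record of one security control's assessment status.
        control_id: str      # illustrative requirement identifier
        implemented_by: str  # "provider" or "customer"
        validated: bool      # True once an assessor has validated it

    @dataclass
    class SharedResponsibilityMatrix:
        # Maps control IDs to the provider's validated assessments.
        provider_controls: dict = field(default_factory=dict)

        def inherit(self, control_id: str) -> Optional[ControlAssessment]:
            # A control is inheritable only if the provider implements it
            # and that implementation has been independently validated.
            record = self.provider_controls.get(control_id)
            return record if record and record.validated else None

    matrix = SharedResponsibilityMatrix(provider_controls={
        "AI-01": ControlAssessment("AI-01", "provider", validated=True),
    })
    for cid in ("AI-01", "AI-02"):
        print(cid, "->", "inherit from provider" if matrix.inherit(cid) else "assess locally")

In a full shared responsibility model, the matrix would also define which party owns each control and what evidence supports inheritance; the sketch captures only the division-of-responsibility lookup.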

Given the dynamic nature of AI security and evolving regulatory requirements, the scope and objectives of the Working Group will continue to adapt. Initial focus areas include identifying AI security risks, enumerating AI-focused inherent risk factors, and developing a shared responsibility model, among others. These efforts will leverage the HITRUST CSF v11.2, harmonized with other emerging AI standards such as the NIST AI Risk Management Framework (AI RMF) and the ISO/IEC 23894 guidance on AI risk management.

Robert Booker, Chief Strategy Officer for HITRUST, emphasized the importance of the initiative, stating, "As AI continues to rapidly scale across industries, the need for organizations offering AI solutions to understand the risks that AI systems contribute to their business and to their customers, to know their responsibilities for managing that risk, and to ensure that they have reliable security assurances from their service and solution providers continues to grow. The Working Group will help define the future of AI Assurances by focusing on practical, scalable, and provable approaches to security and risk management for AI systems that inspire trust for all relying parties."

Current Working Group members bring a wealth of knowledge and expertise in AI and security assurances, representing a broad spectrum of industries, including healthcare and technology. Participating members are committed to reviewing and commenting on harmonized security approaches and controls to ensure they are transparent, appropriately prescriptive, and scalable. This collaboration will enable AI service providers and users to demonstrate their AI systems' security risk management effectively, commensurate with identified risk levels.

"AI adoption in the enterprise is experiencing unprecedented growth and business leaders across every industry are seeking assurance in mitigating AI risk. This is a pivotal moment to get AI security and trust right and HITRUST is leading the charge towards actionable, real-world solutions for AI assurance in healthcare," said Omar Khawaja, Field CISO at Databricks. "Together with HITRUST, we are defining the future of AI security and trust with critical standards that global businesses can depend on."

The Working Group aligns its efforts with the rapidly evolving public sector and regulatory engagement in AI, focusing on actionable implementations of emerging standards and guidance. Anticipated deliverables include:

  • AI and ML inherent risk factors
  • AI security control requirements
  • AI Security Shared Responsibility Model
  • AI inputs for the HITRUST CSF Roadmap
  • AI risk management assurance reports
  • AI use cases

This initiative marks the second step in the HITRUST path to AI Assessment and Certification in 2024, following enhancements to the HITRUST Common Security Framework (CSF) v11.2.0 to include AI risk management controls, and the launch of its AI Assurance Program and strategy document last year. Organizations can download these resources today to begin understanding AI risk factors and start their planning and initial work.

In Q2 2024, HITRUST plans to release its first AI Insight Report, which will allow organizations using the HITRUST Risk-based (r2) Assessment to run a topic-specific report on the state of their AI risk management posture. Then, in 2H 2024, the company expects to expand the control requirements and release a comprehensive set of affordable and accessible assessment options for organizations using HITRUST Essentials (e1), Implemented (i1), and Risk-based (r2) Assessments, along with associated training for its large assessor network.

"We are delighted with our progress on AI. We know the market desperately needs real, operational solutions to assess their AI risks as soon as possible, and I am confident HITRUST will be at the forefront fulfilling that need as we always have," added Booker.

