The pace of AI proliferation is staggering. Companies are rapidly adopting AI tools to streamline data analysis, automate processes, and enhance customer experiences. Industry leaders are using AI. Your competitors are adopting AI. And you, too, are bringing the technology to your organization.
But how do you know if your AI system is trustworthy? Is the introduction of AI exposing your organization — and your customers — to new security and privacy risks? Is your organization prepared to face potential AI threats?
Many legal, ethical, and operational considerations come into play when using AI, and the pace of innovation means that cybersecurity leaders may not yet have playbooks to stay ahead. This makes collaboration among industry technology and cybersecurity leaders critical. Over the next few years, most will need to keep innovating while simultaneously discovering not only what AI can do but also what new risks it creates and how best to address them. Some existing practices will translate to controlling these risks; others will need to be introduced. Going it alone will not be an effective strategy.
How HITRUST is enhancing AI risk management
HITRUST has long brought organizations, cloud service providers, and vendors together to share responsibilities through an inheritable control framework. Building on those principles, it recently launched the HITRUST AI Assurance Program, the first and only assurance program that enables organizations to demonstrate and share control assurances for generative AI and other AI models.
HITRUST is driving collaboration through working groups to identify practical, scalable assurances for AI risk management, security, and privacy. With efficient risk management and credible assurances, organizations can realize the benefits of AI while mitigating the associated risks.
Potential AI risks
When working with AI systems, you need to identify and analyze potential risks. Inaccurate training data can lead to incorrect results, and AI models may produce inappropriate output if the input data is incomplete or biased. There are operational risks as well: you must ensure AI systems function as intended. Although AI systems can carry out automated tasks on their own, they learn from experience and their behavior can shift over time, so humans need to monitor and review them regularly.
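Part of that ongoing monitoring can be automated. As a minimal illustrative sketch (not part of any HITRUST control set, and assuming a model that emits numeric scores), a drift check such as the population stability index can compare production outputs against a validation-time baseline and flag when human review is warranted:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; a higher PSI indicates more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Normalize to proportions, flooring at a tiny value to avoid log(0).
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Synthetic example: baseline scores from validation, shifted scores in production.
rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 10_000)
production = rng.normal(0.6, 0.1, 10_000)

psi = population_stability_index(baseline, production)
print(f"PSI = {psi:.3f}")
if psi > 0.2:  # a common rule-of-thumb threshold for significant drift
    print("ALERT: significant drift detected; trigger human review")
```

The 0.1/0.2 PSI thresholds are industry rules of thumb, not a standard; the point is that routine human oversight can be anchored to automated signals rather than ad hoc spot checks.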
AI risk management
AI risk can disrupt businesses and cause significant losses. HITRUST is taking concrete steps to help you manage this risk: it incorporated AI risk management into the HITRUST CSF v11.2, released in October 2023.
The HITRUST CSF offers a foundation for AI risk management and is updated regularly as new controls are identified so that it stays relevant as AI risks, security controls, and regulatory requirements continue to evolve.
The current CSF version draws on two risk management sources. The NIST AI Risk Management Framework fosters trust in the design, development, and use of AI. The ISO AI risk management guidelines (ISO/IEC 23894) help organizations manage AI risks when developing, deploying, or using the technology. HITRUST will incorporate additional sources in 2024 to make the framework more robust.
AI systems are built upon existing IT systems, which may already be protected by the IT security controls identified in the CSF. HITRUST's AI updates provide added security considerations for these systems. And with AI assurance, HITRUST will offer an effective mechanism of standards, testing, evaluation, and certification. It's the first step in helping you build trust in your AI systems and translate their benefits for your organization and key stakeholders.
AI is a game-changing technology, and in today's competitive landscape organizations cannot afford to sit it out. As an industry, we must collaborate to be responsible and prepared while adopting AI securely.