Businesses across the globe are unleashing the power and impact of generative AI. The launch of ChatGPT in November 2022 created a buzz for generative AI tools by exposing the power of AI to the general public. From writing computer code to creating school essays, the chatbot presented a glimpse of what coming technologies could do.

As more companies adopt generative AI to streamline business processes, support customers, and transform creative and technological work, it becomes critical for CISOs, as enterprise risk leaders, to support those benefits while understanding, communicating, and managing the potential AI risks.

Ask CISOs about generative AI security and each will offer a different perspective, shaped by where they are in the journey. It’s a relatively new technology, and no one is an expert; all of us are on a shared journey of discovery and innovation. Let’s explore the steps a CISO can take along their AI journey.

Learn. Learn. Learn.

CISOs admit they are not AI experts; even the definition of AI is often framed in the context of each company’s needs. There are many AI concepts and terms that, as a CISO, you may not be familiar with. That’s okay. Approach them with an open mind, and understand that you must educate yourself on the technology to help your organization get the best out of it. Read books, listen to podcasts, watch videos, attend conferences, and expand your knowledge. Build and join learning and sharing groups across your organization. Your team members, non-technical colleagues, and early-tenured employees may know more than you think. Engage with them, and don’t shy away from asking questions. There will always be something new, so make sure you stay current and relevant.

Make the most of AI.

Generative AI is everywhere, and new tools can enhance your organization’s security leadership. AI-driven security systems are increasingly adept at supporting security operations teams in researching and responding to threats and potential cyber-attacks. AI systems can also help organizations ensure compliance with data protection regulations by supporting data auditing, consent management, and privacy impact assessments.

Research the use of generative AI to develop security chatbots or development assistants for your colleagues and customers. Automate specific cybersecurity tasks and increase the time your team has to focus on complex and strategic programs. Leverage AI tools to optimize resources, defend against threats, and reduce costs.
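As a rough illustration of what automating a routine cybersecurity task can look like, the sketch below asks a generative AI model to draft a first-pass triage note for a security alert that an analyst then reviews. It is a minimal sketch, not a recommended design: it assumes the OpenAI Python client with an API key in the OPENAI_API_KEY environment variable, and the model name and prompt are placeholders to adapt to your own approved tooling and policies.

```python
# Minimal sketch: draft a first-pass triage note for a security alert with a
# generative AI model, freeing analyst time for higher-value work.
# Assumes the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY set in the
# environment; the model name below is an example only.
from openai import OpenAI

client = OpenAI()

def draft_triage_summary(alert_text: str) -> str:
    """Ask the model for a short, analyst-reviewable triage note."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute your approved model
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Summarize the alert, "
                        "list likely causes, and suggest next steps."},
            {"role": "user", "content": alert_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    alert = ("Multiple failed logins for admin@example.com from 203.0.113.7, "
             "followed by a successful login.")
    print(draft_triage_summary(alert))
```

Any automation along these lines should keep a human in the loop and follow your organization’s data handling rules before real alert data is sent to an external service.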

Understand that new technology brings new risks, but old risks remain.

Along with numerous benefits, AI brings many new potential risks. Your company’s employees are likely using it without permission. The outcomes that AI systems generate may not be fully predictable or trustworthy. It is important for CISOs to develop a new approach to generative AI security. Stay curious about the use cases that add value, recognizing that new ideas are abundant. And recognize that traditional approaches to threat modeling and risk assessment assume a world where code doesn’t change by itself and outcomes are deterministic.

With AI, it’s different. You may have to rethink the paradigm you spent 15-20 years developing. You need new controls. You need rigorous data governance and clarity on the data used to train and tune the AI systems you rely upon. However, as with any risk, the goal is managing AI risk, not eliminating it entirely. Educate yourself about risk management for AI. Begin considering the essential practices. Engage as a business leader on the problem, which extends beyond security alone. Recognize that some questions are not easily answered at first, and engage with others working on the same problems.

More importantly, recognize that AI systems nearly always operate on traditional full-stack architectures, often rely on third-party service providers, and use significant data systems for training and tuning. The vast majority of this technology can be protected by traditional security controls and risk management methods, so there is no need to wait for AI-specific controls to mature before getting started. CISOs have significant experience in selecting appropriate security controls, managing security systems, and measuring their maturity. Use new AI initiatives as a catalyst to revalidate these traditional measures. For example, require multi-factor authentication so that an attacker, human or machine, cannot exploit credentials on the infrastructure supporting both your traditional and new AI systems.

Lastly, avoid the temptation to focus only on the new risk dimensions of AI while de-emphasizing the existing, proven security capabilities that apply to the AI architecture and systems. Validate that your existing controls cover new AI systems and use cases.

Recognize and manage today’s largest AI risk.

Because generative AI tools are so easy to access, you may not even know how extensively your employees use them. Indiscriminate use of AI can put organizational data at risk, and the resulting privacy risk to consumers and confidentiality risk to company data is the largest active AI risk for many organizations. Consider all the different ways your organization is using AI. Conduct AI risk awareness campaigns to educate your employees about threats to them and to your company, and show them appropriate opportunities to embrace AI. Ultimately, weigh the data protection and privacy risks and identify where you must ensure that internal and shared data is not exposed through AI tools.
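One hedged, minimal way to picture that last point is a pre-submission filter that redacts obvious identifiers before text is sent to an external generative AI service. The patterns below (email addresses and US SSN-like numbers) are illustrative assumptions only; a real data protection program would rely on far broader classification and tooling.

```python
# Minimal sketch: redact obvious identifiers before text leaves the organization
# for an external generative AI service. Patterns are illustrative only.
import re

REDACTION_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[REDACTED EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED SSN]"),           # US SSN-like numbers
]

def redact(text: str) -> str:
    """Apply each redaction rule before the text is shared with an AI tool."""
    for pattern, replacement in REDACTION_RULES:
        text = pattern.sub(replacement, text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789, about the incident."))
```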

Key takeaway

New technologies like AI are fascinating. Their applications and benefits are endless. But there are always risks involved, which is why CISOs remain important enterprise risk management leaders. Your organization needs generative AI to stay ahead of the competition. Whether generative AI security is different or not, focus on strategies to develop a robust cybersecurity program and use your existing program to begin managing the risk to AI systems today. Educate yourself, engage with your community, and prepare to make the most of AI while managing its risks.

HITRUST AI Assurance Program

HITRUST launched its AI Assurance Program, the first and only system focused on achieving and sharing cybersecurity control assurances for generative AI and other emerging AI applications. Read our strategy document and stay tuned for additional blog posts. Check out our recent press release, HITRUST Releases the Industry’s First AI Assurance Program.
