The AI hype cycle is back. Over the past few years, media attention around AI has come and gone. Around half a decade ago, headlines promised a Jetsons-like future in which living with robots and holograms would soon become reality.
The recent wave of AI hype is more grounded in reality. With tools like ChatGPT performing real work, such as writing code and creating content, AI innovations are transforming businesses across the globe.
As the podcast Trust vs. wrapped its first season, the final episode tackled the complex yet impactful topic of AI. Hosts Robert Booker, Chief Strategy Officer at HITRUST, and Jeremy Huval, Chief Innovation Officer at HITRUST, gathered insights from a range of cybersecurity leaders and experts.
Here are some key takeaways from the episode Trust vs. AI.
Is AI good or bad?
AI is quickly bringing exciting changes to companies across different industries. In the healthcare space, it enables providers to lower costs, make quicker decisions, and improve the overall quality of care. But like any new technology, AI is a double-edged sword: it can also be used for automated phishing campaigns or for creating deepfakes.
So, is AI good or bad? The answer depends on the way it is used. As a cybersecurity professional, you need to think about how to use AI to maximize the good and minimize its risks.
What does the AI future look like?
Can you imagine a future in which AI replaces humans? In that scenario, a few decades from now, AI might run tests, diagnose problems, and implement solutions entirely on its own. Maybe we’ll get there eventually. Maybe we won’t. Either way, it will be a long journey, and it starts with addressing AI’s inherent risks.
Today, AI can make life easier and more efficient by increasing productivity. You can ask an AI tool to write emails or plan your day. As AI adoption grows, companies using AI solutions can gain a competitive edge.
AI technologies can help you make informed decisions. For instance, Human Resources may use AI tools to analyze data, interview candidates, and find the right person for a position. However, some risks may be involved, including ethical concerns and biases. Businesspeople should use AI as a decision support system, not the decision maker.
What are some AI risks?
The hallmark of AI is that it learns and adapts, and that introduces unique risks. A system may start behaving unexpectedly. Poisoning attacks may become common. With a traditional application, once you have reviewed the code and deployed it, you can largely leave it alone. That’s not the case with AI: a model can change from minute to minute and demands constant monitoring.
Another significant AI risk is an increase in breaches. Attackers may use AI to analyze data and get access to confidential information. This means organizations must be more proactive in their AI risk management efforts and use the technology responsibly and securely.
How does the AI assurance landscape look?
Industry leaders are collaborating to create AI risk management guidelines. How do you use the technology effectively? How do you mitigate AI risks? What are some critical steps for organizations?
HITRUST is taking essential steps in AI assurance. It currently has a patent pending with the US Patent Office for using natural language processing and AI to help map an organization’s written policy documents to control requirements. Recently, it launched the HITRUST AI Assurance Program, the first and only system focused on achieving and sharing cybersecurity control assurances for generative AI and other emerging AI applications.
As AI innovation introduces new risks, businesses need appropriate controls for effective AI risk management. They also need AI assurance to verify that partner organizations are implementing those controls. A reliable system of controls, paired with an assurance mechanism, is critical to creating trust.
To learn more about Trust vs. AI, tune into the podcast episode.
Did you miss any of the episodes? Find all the Season 1 episodes here.