Excitement and curiosity surrounding Artificial Intelligence (AI) have been rising steadily since it first entered public awareness. AI offers promises and challenges in equal measure, and the pace of both is staggering. News outlets are buzzing with more questions than answers about the future of AI and the impacts it will have on innovation, healthcare, jobs, and global economies. Just a month ago, titans of the technology industry, including Tesla’s Elon Musk, Meta’s Mark Zuckerberg, and Alphabet’s Sundar Pichai, met with 60 Congress members to discuss the need for AI regulation. They agreed the government should play a role as a “referee” in regulating AI. Are they successfully getting ahead of the issues posed by AI, or are they already behind? Only time will tell.
The latest buzz in the AI world is Generative AI. How will it shape the future of AI? What promises and challenges does it bring? Let’s explore.
Promises of Generative AI
The speed at which Generative AI technologies have been introduced and adopted over the past year has been astounding. Remember the launch of ChatGPT? Within five days, it attracted more than one million users.
Industry trends and government initiatives suggest that businesses and regulators alike are scrambling to board a train already moving at record-breaking speed. Is that good? Is it bad? Will Generative AI improve our businesses and societies, or does it expose us to unimaginable cyber risks?
From an economic perspective, the indicators are positive. Goldman Sachs Research predicts that Generative AI could raise global GDP by 7% over the next 10 years. Organizations are eager to transform their operations and boost productivity across business functions, whether by super-charging sales and marketing with Generative AI customer relationship management (CRM) tools or by developing software that analyzes complex data quickly and identifies opportunities to drive new products, solutions, and revenue streams.
When considering the impacts Generative AI can make in healthcare, it seems that the technology will affect our societies positively. Its ability to swiftly analyze massive amounts of data can lead to more accurate diagnostics and improved prediction of risks and diseases. Drug trials can be more efficient and effective, offering life-saving results.
CEO and Founder of Equum Medical, Dr. Corey Scurlock, pointed out that the potential for cost reduction and increased productivity is “the dream” of AI. However, he added that the overwhelming volume of data required by AI systems makes this “a dream deferred.” Yet, he remains hopeful about the “transformational” potential of Generative AI to improve healthcare and the overall quality of life.
Challenges of Generative AI
Alongside these hopes for a promising future come Generative AI cyber risks. Dr. Scurlock, the tech titans, and Congress members all believe the time to implement guardrails and governance has arrived. Any new, disruptive technology introduces unknown risks. Does the breakneck pace of innovation in Generative AI mean greater risks? Probably, yes.
Generative AI risk management calls for enhanced governance and responsible action. At the meeting with Congress, Zuckerberg said the US government is “ultimately responsible” for balancing and managing AI risks. Musk called for regulators to “ensure that companies take actions that are safe and in the interest of the general public.” But it will take more than these well-known tech leaders and a handful of Congress members to chart the best path forward for businesses, governments, and societies to benefit from AI and Generative AI.
Organizations must be aware of Generative AI cyber risks. AI systems are complex and multi-faceted, and there is much more to them than meets the eye. Only a few organizations have the expertise, resources, or budget to manage these risks on their own.
For effective Generative AI risk management, the systems on which the technology is delivered and consumed must be trustworthy. AI service providers should offer clear, objective, and understandable documentation of their risks and of how those risks, including security, are managed. Working with a trustworthy partner that meets these requirements can help AI users inherit that partner’s capabilities and trustworthiness.
The best approach requires meaningful and enduring public and private partnerships, with participation from organizations across verticals. From startups to large corporations, everyone needs to join hands. To adequately quantify, measure, and manage risks, cloud service providers, cyber security solution providers, and assurance organizations must play a key role, too. Strategic partnerships and united efforts will help better the future of AI.
HITRUST AI Assurance Program
HITRUST launched its AI Assurance Program, the first and only program focused on achieving and sharing cybersecurity control assurances for Generative AI and other emerging AI applications. Read our strategy document and stay tuned for additional blog posts on this topic. Check out our recent press release, HITRUST Releases the Industry’s First AI Assurance Program.