- Tom Kellermann, VP of Cyber Risk, HITRUST
It starts innocently: a pilot that becomes production, a chatbot wired into calendars and CRMs, agents granted access to files and payments. Momentum outruns scrutiny. Teams assure themselves that nothing dangerous could happen to them. Meanwhile, the dark passenger, silent and invisible, settles in next to your AI stack and waits for a turn at the wheel.
Your model doesn’t have to be hacked to become dangerous. It only has to trust the wrong input. A single poisoned record, an indirect instruction, or one over-privileged connector is enough. The question isn’t whether attackers can manipulate AI. It’s whether you’ll notice before the damage is operational, financial, and public.
The problem we created: Connected autonomy, unchecked
The dark passenger is not theoretical. At Black Hat USA in August 2025, three researchers showed how a poisoned calendar invite could hijack a major AI model and flip connected lights, open smart shutters, and even turn on the boiler. Indirect prompt injection can steer an AI system you trust into actions you never intended.
AI systems don’t just answer anymore; they act. What started as a text predictor now reads calendars, invokes plug-ins, touches data lakes, and fires off actions without a person watching. Each new connector, tool, and permission widens the blast radius.
Two shifts amplified this risk.
- Implicit trust: Models and agents routinely treat enterprise sources, SaaS connectors, and user-provided content as safe by default.
- Invisible intermediaries: Helpful layers like SDKs, extensions, and RAG pipelines make it hard to see where an instruction originated or who approved the capability.
That’s the dark passenger’s comfort zone: riding inside “trusted” workflows where guardrails are assumed, not enforced. The danger isn’t abstract; it’s operational.
Significant AI threats and how to address them
Data and model poisoning
Poisoning happens when someone slips harmful information into the data your AI learns from (training) or looks up while answering (like internal knowledge bases). The AI trusts that data, which means it can learn the wrong thing or follow hidden instructions without anyone hacking the system directly. The results range from incorrect decisions to data leaks or backdoor behaviors that only trigger on certain cues.
What to do: Treat data like code. Allow-list trusted sources, enforce provenance checks and signed artifacts for datasets and models, and sanitize inputs to strip hidden instructions. Restrict the data RAG can access and monitor for anomalies. If you don’t control the data your AI consumes, you don’t control the model’s behavior.
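To make this concrete, here is a minimal sketch of a pre-ingestion gate for a RAG pipeline: it allow-lists sources, checks each document’s hash against a pinned manifest, and strips obvious embedded-instruction patterns. The file names, manifest format, and regex list are illustrative assumptions, not part of any specific framework or product.

```python
# Minimal sketch: gate documents before they enter a RAG index.
# File names, the manifest format, and the pattern list are illustrative.
import hashlib
import json
import re
from pathlib import Path

# Allow-list of sources permitted to feed the retrieval index.
ALLOWED_SOURCES = {"s3://corp-kb-prod", "sharepoint://policies"}

# Regexes for obvious embedded-instruction patterns; real deployments
# would pair this with model-side defenses, not rely on regex alone.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"<\s*system\s*>", re.I),
]

def sha256(path: Path) -> str:
    """Hash an artifact so it can be checked against a pinned manifest."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_provenance(path: Path, source: str, manifest: dict) -> bool:
    """Accept a document only if its source is allow-listed and its hash
    matches the entry recorded (and ideally signed) at approval time."""
    if source not in ALLOWED_SOURCES:
        return False
    expected = manifest.get(path.name)
    return expected is not None and expected == sha256(path)

def sanitize(text: str) -> str:
    """Strip text that matches known hidden-instruction patterns."""
    for pattern in SUSPICIOUS_PATTERNS:
        text = pattern.sub("[removed]", text)
    return text

if __name__ == "__main__":
    manifest = json.loads(Path("trusted_manifest.json").read_text())
    doc = Path("handbook.txt")
    if verify_provenance(doc, "s3://corp-kb-prod", manifest):
        clean = sanitize(doc.read_text())
        # ...index `clean` into the retrieval store...
    else:
        print(f"Rejected {doc.name}: unknown source or hash mismatch")
```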
Broken access controls
IBM’s Cost of a Data Breach Report 2025 states that AI adoption is outpacing oversight. The study found that 97% of organizations that suffered an AI-related breach reported they lacked proper AI access controls. Tokens, service accounts, and agent permissions are frequently over-privileged, long-lived, and unbound to accountable humans. That’s how a single connector becomes an attacker’s Swiss Army knife.
What to do: Treat non-human identities like production users. Enforce least-privilege scopes for agents and tools, short-lived tokens, and human accountability for every action an AI can take. This is where incidents take root; tighten it first.
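As one illustration, the sketch below shows the shape of least-privilege, short-lived credentials for an agent: every token is scoped to specific tools, expires quickly, and is bound to a named human owner for audit. The scope names, 15-minute TTL, and logging format are assumptions for the example, not a prescribed implementation.

```python
# Minimal sketch of least-privilege, short-lived credentials for an agent.
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentToken:
    agent_id: str
    owner: str                      # accountable human for every action
    scopes: frozenset               # least-privilege, tool-level scopes
    expires_at: datetime
    value: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def issue_token(agent_id: str, owner: str, scopes: set,
                ttl_minutes: int = 15) -> AgentToken:
    """Issue a short-lived token bound to a named owner and explicit scopes."""
    return AgentToken(
        agent_id=agent_id,
        owner=owner,
        scopes=frozenset(scopes),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

def authorize(token: AgentToken, required_scope: str) -> bool:
    """Deny expired tokens and any action outside the granted scopes."""
    if datetime.now(timezone.utc) >= token.expires_at:
        return False
    allowed = required_scope in token.scopes
    # Every decision is logged against an accountable human.
    print(f"audit owner={token.owner} agent={token.agent_id} "
          f"scope={required_scope} allowed={allowed}")
    return allowed

token = issue_token("invoice-bot", "jane.doe@example.com", {"crm:read"})
authorize(token, "crm:read")        # permitted
authorize(token, "payments:write")  # denied: not in granted scopes
```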
Supply chain compromise
Supply chain compromises, including compromised apps, APIs, and plug-ins, were the most common cause of AI security incidents (30%), according to the IBM study. If your agents trust third-party components, your security is only as strong as the least-secure plug-in in the chain.
What to do: Manage plug-in and API versions. Require digital signatures. Verify dataset provenance. Restrict agent tool scopes and continuously monitor third-party connectors. Small faults in the supply chain can lead to big failures.
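A small sketch of that discipline: pin plug-in names, versions, and artifact digests in a lockfile at approval time, and refuse to load anything that drifts. The lockfile format and plug-in names here are hypothetical; production systems would also verify publisher signatures rather than rely on digests alone.

```python
# Minimal sketch: pin plug-in versions and verify artifact digests before
# an agent loads them. Lockfile format and plug-in names are illustrative;
# pair this with publisher signature verification (e.g., Sigstore) in practice.
import hashlib
import json
from pathlib import Path

LOCKFILE = Path("plugin.lock.json")   # {"name": {"version": ..., "sha256": ...}}

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_plugin(name: str, version: str, artifact: Path) -> bool:
    """Refuse to load a plug-in unless its name, version, and digest match
    the pinned entries reviewed at approval time."""
    lock = json.loads(LOCKFILE.read_text())
    entry = lock.get(name)
    if entry is None:
        return False                          # unapproved plug-in
    if entry["version"] != version:
        return False                          # unreviewed version drift
    return entry["sha256"] == digest(artifact)

if __name__ == "__main__":
    if verify_plugin("calendar-connector", "2.1.0",
                     Path("calendar_connector-2.1.0.whl")):
        print("plug-in verified; safe to register with the agent")
    else:
        print("verification failed; quarantine and alert")
```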
Why governance alone is not enough
Governance matters, but governance is not security. Standard frameworks like ISO/IEC 42001 and 23894 provide governance and risk guidance, but they do not address the novel threats inside the AI layers themselves. Without enforced, validated controls and robust security measures, governance becomes paperwork while attackers exploit the AI stack.
Why you should care (now)
AI systems drive real operations. That means a single poisoned record can trigger a wire transfer; a hidden instruction can start a video call; a compromised plug-in can leak data. Boards, customers, regulators, and insurers are no longer impressed by slide decks; they want provable security.
IBM’s numbers show that when AI goes wrong, the bill rises. The global average breach cost landed at $4.44M. If AI agents are making moves in your environment, you need controls that bind them, audit them, and fence what they can touch.
The urgent requirement: AI security and assurance
The real risk is inside the AI layers: the connectors, plug-ins, and permissions that carry novel threats no governance program or standard security framework fully covers. This is why AI-specific security and assurance are urgent. Vendors may claim their product “makes you secure,” but history shows that piecemeal solutions fail. What’s needed is holistic security, with controls validated against proven benchmarks.
The dynamic blueprint: HITRUST
This is where HITRUST is relevant. Unlike standards that validate policies, HITRUST validates security. It’s not a new checklist. It’s a validated assurance program that adapts to threat reality and proves controls are operating in practice, not just on paper. HITRUST offers a dynamic security blueprint that strengthens cyber resilience.
HITRUST is cyber threat adaptive. It evolves continuously with threat intelligence on attacker tradecraft, ensuring AI controls stay current with emerging threats. HITRUST built the AI Security Certification to secure deployed AI systems; it incorporates up to 44 AI-focused controls designed to address the AI attack surface. These controls are independently validated and integrate seamlessly with HITRUST’s proven assurance framework, ensuring that AI systems are not only governed but also secured against real threats.
The AI Security Certification builds on HITRUST’s core security offerings (e1, i1, or r2) for comprehensive resilience and broad cybersecurity coverage. Organizations can establish that baseline and then extend it to secure their AI systems and achieve validated AI assurance.
Does HITRUST work?
HITRUST is the only assurance mechanism proven to reduce risk. 99.41% of HITRUST-certified environments were breach-free in 2024, per the Trust Report, evidence that validated, threat-adaptive control sets mitigate real-world risk.
Mapping traditional breach drivers and AI-specific risks against HITRUST requirements shows that an e1 + AI Security Certification pairing covers the majority of the breach factors IBM identifies, such as phishing, compromised credentials, backups, and, critically, AI access controls. Stepping up to an i1 + AI Security Certification strengthens coverage for insider threats and recovery plan testing, closing gaps that trip up most programs.
Why trust HITRUST (especially for AI)
HITRUST doesn’t just define controls; it validates that they operate, harmonizes them across frameworks, and updates them as attacker tradecraft shifts. It also scales: start with e1 + AI Security Certification to contain the biggest risks, step up to i1 + AI Security Certification for enhanced data protection, and pursue r2 + AI Security Certification for the most robust security posture.
Only HITRUST delivers AI assurance grounded in proven methodology, continuous threat adaptation, and independent validation. That’s how you turn AI from a reputational liability into a competitive advantage, backed by evidence.
Conclusion
“The greatest trick the devil ever pulled was convincing the world that he doesn’t exist.” - Charles Baudelaire
Your AI’s dark passenger is very real. Recognize it. Govern it. And most importantly, secure it with proven controls before it takes the wheel.