HITRUST AI Security Assessment and Certification Specification
Content Disclaimers
START HERE
About the HITRUST AI Security Certification
What problem does this certification help address?
Why HITRUST for AI assurances?
Who can obtain this certification?
What types of AI can certify?
What is this assessment and certification… not?
Which AI system layers are considered?
Shared AI Responsibilities and Inheritance
Is this a stand-alone assessment?
How to tailor the HITRUST assessment?
Guidance for External Assessors
How big / how many requirements?
AI security requirements included
TOPIC: AI security threat management
Identifying security threats to the AI system (What can go wrong and where?)
Threat modeling (What are we doing about it?)
Security evaluations such as AI red teaming (Are the countermeasures working?)
TOPIC: AI security governance and oversight
Assign Roles and Resp. for AI
Augment written policies to address AI specificities
Humans can intervene if needed
TOPIC: Development of AI software
Provide AI security training to AI builders and deployers
Version control of AI assets
Inspection of AI software assets
Change control over AI models
Change control over language model tools
Documentation of AI specifics during system design and development
Linkage between dataset, model, and pipeline config
Verification of origin and integrity of AI assets
TOPIC: AI legal and compliance
ID and evaluate compliance & legal obligations for AI system development and deployment
ID and evaluate any constraints on data used for AI
TOPIC: AI supply chain
Due diligence review of AI providers
Review the model card of models used by the AI system
AI security requirements communicated to AI providers
TOPIC: Model robustness
Data minimization or anonymization
Limit output specificity and precision
Additional Training Data Measures
TOPIC: Access to the AI system
Limit the release of technical info about the AI system
Model Rate Limiting / Throttling
GenAI model least privilege
Restrict access to data used for AI
Restrict access to AI models
Restrict access to interact with the AI model
Restrict access to the AI engineering environment and AI code
TOPIC: Encryption of AI assets
Encrypt traffic to and from the AI model
Encrypt AI assets at rest
TOPIC: AI system logging and monitoring
Log AI system inputs and outputs
Monitor AI system inputs and outputs
Monitoring for data, models, and configs for suspicious changes
TOPIC: Documenting and inventorying AI systems
Inventory deployed AI systems
Maintain a catalog of trusted data sources for AI
AI data and data supply inventory
Model card publication (for model builders)
TOPIC: Filtering and sanitizing AI data, inputs, and outputs
Dataset sanitization
Input filtering
Output encoding
Output filtering
TOPIC: Resilience of the AI system
Updating incident response for AI specifics
Backing up AI system assets
AI security threats considered
TOPIC: Availability attacks
Denial of AI service
TOPIC: Input-based attacks
Evasion (including adversarial examples)
Model extraction and theft
Model inversion
Prompt injection
TOPIC: Poisoning attacks
Data poisoning
Model poisoning
TOPIC: Supply chain attacks
Compromised 3rd-party models or code
Compromised 3rd-party training datasets
TOPIC: Threats inherent to language models
Confabulation
Excessive agency
Sensitive information disclosed in output
Harmful code generation
Crosswalks to other sources of AI guidance
ISO/IEC 23894:2023
ISO/IEC 42001:2023
AI updates to HITRUST’s glossary
Added terms
Added acronyms
Previous
Next
Content Disclaimers

Portions of this document are proprietary and subject to copyright and intellectual property protection.

  • The HITRUST CSF is owned by HITRUST, and any use of the HITRUST CSF, or components contained therein, are expressly prohibited.
  • Where used with or without marks, HITRUST®, HITRUST CSF®, and MyCSF® are registered marks, and HITRUST Assurance Program™ is a registered trademark.
  • This document also contains section numbers and section titles from various authoritative sources dealing with AI security, all of which are attributed to the authoring organization. These have been provided to assist reviewers in locating the associated content within these documents (as hyperlinks to the content within the document, and as breadcrumb trails if not), for reference purposes only. This content is not HITRUST’s intellectual property or copyrighted by HITRUST. Other trademarks or copyrighted materials in this document are the property of their respective owners.
START HERE
If you are interested in… Where to go
An overview of this new certification Go here
The 44 AI security requirements themselves Go here
The assessment process, and assessment scoring Go here to visit the HITRUST Assessment Handbook
The AI-focused security threats considered Go here
The HITRUST AI glossary Go here
Crosswalks to other sources of AI guidance Go here
 
 
About the HITRUST AI Security Certification

This HITRUST AI Security Certification is a key component of HITRUST’s larger AI Assurance Program.

Key aspects of this assessment and certification at a glance:

No. Consideration Answer at-a-glance
1 What problem does this certification help solve? Enables organizations to demonstrate that they sufficiently mitigate cybersecurity threats to the AI technologies they have deployed. The focus of this certification is security for AI systems.
2 How is this a problem that HITRUST can help solve? HITRUST has all components needed to enable IT cybersecurity assurances correctly, consistently, and at scale. It’s what we’ve been building for the last 17 years. These are the same building blocks needed to enable AI assurances to the same degree of quality and reliability.
3 Who can achieve this certification? Providers of AI systems, including AI Application Providers and AI Platform Providers. This certification is not for organizations simply using AI systems deployed by others (in the same way a SaaS user organization can’t get a HITRUST r2 certification on behalf of its service provider).
4 What is this certification… not? While cybersecurity of the AI system is absolutely a key risk that must be understood and addressed, it is not the only risk introduced when AI is deployed. Organizations who achieve this certification will still need to navigate the additional risk areas in the Responsible AI landscape (such as AI privacy, ethics, and transparency).
5 What types of AI models qualify for certification? Generative AI, predictive AI (i.e., non-generative machine learning), and even the older rule-based AI (i.e., expert systems).
6 Which AI system layers are considered? It focuses on the added IT components unique to AI (e.g., the model, the AI platform, and any specialized AI compute infrastructure in use) in addition the overall IT platform components normally scoped into a HITRUST assessment.
7 What are the AI security requirements needed for certification? Up to 44 AI security-specific HITRUST CSF requirements, depending on how the assessment is tailored.
8 Which AI security threats are considered? 13 threats. Some of the threats in this document’s AI security threat register are novel (e.g., prompt injection), and others are well-known security threats that are exacerbated by the deployment of AI.
9 How long is the certification valid? Matches that of the underlying HITRUST CSF assessment. Meaning, it is valid for 1 year if attached to an e1 or i1 assessment and for 2 years if attached to an r2 assessment.
What problem does this certification help address?

The new HITRUST AI Security Certification proactively addresses questions and concerns over AI security within deployed AI systems, which continue to mount in the third-party risk management space.

AI changes the cybersecurity threat landscape

Using any new technology brings about new inherent risks…in the case of AI, maybe more so. While AI presents opportunities, it also introduces unique risks and compliance challenges that demand attention. Excitement about AI, like all new systems, has the potential to relegate critical security and assurance considerations to afterthoughts. Managing the security risks of AI systems is critical, as failing to do so can have severe consequences.

How are AI systems similar to what we already know?

In many respects, systems leveraging AI models are similar to the IT systems we’ve been deploying for years. Both run on familiar infrastructure and services with known risks and security patterns. For both, security of the infrastructure provides the needed foundation. Data access, data governance, and data control yield outsized benefits for both, and in both the greatest risk follows the sensitive data. Because AI systems are still IT systems, organizations need to apply conventional IT security controls to these systems.

How are AI systems different, from a security perspective?

The deployment of AI imposes novel security threats while exacerbating others, requiring additional cybersecurity measures that are not comprehensively addressed by current risk frameworks and approaches (including the HITRUST CSF up to this point). Compared to traditional software, AI-specific security risks that are new or increased include the following:

  • Issues in the data used to train AI models can bring about unwanted outcomes, as intentional or unintentional changes to AI training data has the potential to fundamentally alter AI system performance.
  • AI models and their associated configurations (such as the metaprompt) are a high-value target to attackers who are discovering new and difficult to detect approaches to breach AI systems.
  • Modern AI deployments rely on third party service providers to an even greater degree, making supply chain risks such as software and data supply chain poisoning a very real threat.
  • AI systems may require more frequent maintenance and triggers for conducting corrective maintenance due to changes in the threat landscape and data, model, or concept drift.

Further, generative AI systems face even more unique security threats, including the following:

  • GenAI systems have random and unexpected output by design, which is often completely inaccurate. Pairing this with the confidence in which genAI systems communicate their responses leads to a heightened risk of overreliance on inaccurate output.
  • The randomness of genAI system outputs challenges long-standing approaches to software quality assurance testing.
  • Because the foundational models that underpin genAI are commonly trained on data sourced from the open Internet, these systems may produce output which is offensive or fundamentally similar to copyrighted works.
  • Because genAI models are often tuned or augmented with sensitive information, genAI output has the potential to inappropriately disclose this sensitive data to users of the AI system.
  • Organizations are actively giving genAI systems access to additional capabilities through language model tools such as agents and plugins, which (a) may give rise to excessive agency and (b) extends the system’s security and compliance boundary.

How is HITRUST helping address this need?

Organizations must address the cybersecurity of AI systems by (1) extending existing IT security practices and (2) proactively addressing AI security specificities through new IT security practices. The HITRUST AI Cybersecurity Certification equips organizations to do this effectively through providing prescriptive and relevant AI security controls, a means to assess those controls, and reliable reporting that can be shared with internal and external stakeholders.

Why HITRUST for AI assurances?

As a longstanding leader in information security and cybersecurity risk management with 17 years of practical experience and demonstrable results, HITRUST’s role as an IT certification body, assessor accreditation body, and information protection standards harmonization body uniquely positions us to help address the growing need for AI security assurances.

We commend the efforts of global standards development organizations such as ENISA, NIST, and ISO; non-profit foundations such as the OWASP Foundation; and governments across the globe who work tirelessly to provide guidance on wielding artificial intelligence technologies responsibly. However, a key missing element is a robust assurance framework to ensure the relevance, implementation, operation, and maturity of the guardrails and safeguards needed to mitigate AI security threats. The technology industry is not lacking in AI standards but is lacking in a means to ensure that there are appropriate and reliable mechanisms to define, measure, and report on their implementation and effectiveness.

A reliable AI assurance solution must ensure that relevant controls embodied in AI standards and applicable regulations are implemented and operating properly to deliver effective risk mitigation. Relevant frameworks with reliable assurances provide confidence and transparency that the appropriate controls are implemented and operating effectively.

Specifically, consider the following key points:

  1. Strong assurance program: Proven assurance programs should be leveraged to validate that controls are not only in place to mitigate risks and threats introduced through the adoption of AI, but also that these controls are effective and operationally mature. In addition, assurance systems must be transparent, scalable, consistent, accurate, and efficient— essential for trust and integrity.

  2. Continuous improvement: Standards and frameworks must be kept relevant to the rapidly evolving AI risk and threat landscape. Regulatory standards updated infrequently are insufficient to maintain pace with AI; thus, using approaches that adapt actively as threats evolve is essential.

  3. Measurable outcomes: The industry needs a consistent, transparent, and accurate method to measure and benchmark the effectiveness of AI controls. As the adage goes, “if it is important, you need to measure it.” This allows for continuous optimization and better risk management.

  4. Support for control selection for different AI deployment scenarios: All entities deploying AI models into new and existing IT platforms susceptible to novel cyber-attacks unique to AI. The industry therefore needs an approach that begins with a specific set of good security hygiene and/or best/leading AI security practices applicable to all organizations. Tailoring of additional control selections on top of those practices allows support for additional requirements and outcomes based on inherent risk and the approach needed to provide cyber security and resiliency for different types of AI models and the exciting capabilities they unlock. This model works because the eventual set of requirements are all backed and validated by the same transparent, consistent, accurate and efficient assurance system. This approach permits regulatory consistency without the ‘one size fits all’ approach that is inherently suboptimal due to differences in organizational complexity and maturity.

  5. Support for diversity through inheritance and shared responsibility: Smaller organizations, such as health systems supporting rural or underserved communities, need the same cybersecurity as larger organizations with more resources. As small and large organizations heavily rely on cloud service providers for technology and cybersecurity needs, the use of such systems can accelerate cybersecurity capability adoption for their customers—today, 85% of the requirements for a HITRUST assessment may be inheritable by health industry companies from a HITRUST certified cloud service provider, such as Amazon AWS, Microsoft Azure, or Google Cloud. Making robust AI cybersecurity capabilities available to all organizations deploying AI systems increases efficiency and reduces cost while streamlining security compliance.

  6. Protecting the IT infrastructure enabling AI capabilities: In addition to addressing the cybersecurity threats specific to AI, appropriate security controls across the entire technology stack are necessary to deliver AI capabilities in secure manner. The world’s largest cloud and AI service providers have already demonstrated their commitment to foundational IT assurances through achieving HITRUST r2 certifications scoped to include their AI computing infrastructure and AI PaaS platforms. HITRUST continues to actively collaborate with these industry leaders on AI risk management and security requirements, including an AI Assurance Program built on our proven assurance model, and including shared responsibility and inheritance of security controls available from leading AI service providers.

  7. Risk management, not absolute security: It is critical to shift the culture and mindset from seeking absolute security to managing risks. This involves applying relevant controls and using reliable assurance methodologies to reduce risks to acceptable levels, with remaining residual risks covered by cyber insurance. Regulation and policy making based on data-driven evidence of control implementation provided by assurance systems can enable powerful incentives for regulated entities that confidently demonstrate the maturity of their cybersecurity system in a provable manner.

We know that the approach outlined in our recommendations can be effective as demonstrated and documented in HITRUST’s latest Trust Report of our current certifications, which includes organizations of varying sizes in many industries, did not report a breach over the past two-year period while operating in one of the most aggressive cyber-attack environments in history. This is a testament to the significance of relevant controls and a strong assurance program—one that ensures that the appropriate security controls are validated through reliable testing to earn objective certification. The HITRUST framework is continually updated to address the evolving threat landscape and ensures that organizations can implement and maintain controls that are effective in mitigating AI risk and updated in response to the changing AI threat landscape.

The standards, frameworks, and guidance to identify and mitigate novel risks and threats specific to AI effectively continue to emerge and mature. What is needed in the AI security space is an assurance approach that is proven effective. HITRUST has long championed concepts and implemented solutions for cyber threat adaptive control and assurance frameworks to support comprehensive information risk management, emphasizing the implementation of relevant controls backed by proven and measurable operational maturity of sufficient strength. As discussed above, a proactive and proven approach to AI security assurance is essential.

Early adopters of emerging technologies will continue to be frequent targets of criminals and nation states until we implement approaches that make information security validation and assurance an inherent part technological innovation and new system design. Compliance motivations alone do not solve the problem, as the speed of changing cyber threats outpaces compliance systems. Only a proactive, threat-adaptive approach can ensure that relevant controls are in place and operating before entities are attacked.

We urge cybersecurity leaders to consider these points as they look to enhance the cybersecurity posture of new and considered AI deployments. HITRUST stands ready to support these efforts and to work with you to respond with urgency to the AI cybersecurity and risk management challenge we collectively face. We look forward to continuing our dialogue and working together to strengthen our initial assessment and certification in this important area.

Who can obtain this certification?

The HITRUST AI Security Certification can be achieved by AI Providers (including AI Application Providers as well as AI Platform Providers). Said another way, this is for organizations who deploy AI technologies.

AI Personas

ISO/IEC 22989:2022 provides a helpful list of personas within the AI space that we find helpful. The table below briefly describes a subset of these AI personas, with an indication of whether the persona can achieve the HITRUST AI Security Certification.

AI Persona Description Can they achieve this certification?
AI providers An AI provider is an organization or entity that provides products or services that uses one or more AI systems.

Encompasses:
  • AI platform providers: Provides services that enable other organization to deliver AI-enabled products or services.
  • AI product providers: Provides AI-enabled products directly usable by end-user / end-customer. Also referred to as AI application providers throughout this document.
Yes
AI developers Concerned with the development of AI services and products (e.g., model designers, model verifiers). No. The provider of an AI application and/or AI platform instantiates what an AI developer builds and can achieve this certification, but the software development function cannot. HITRUST does not certify “built but not installed” software.
AI customers / users Users of an AI product or service. No. Analogy: A SaaS user organization cannot achieve a HITRUST certification over the SaaS product (instead, the SaaS provider can).
AI partners Provide products and/or services in the context of AI (e.g., datasets, technical development services, evaluation / assessment services). No

Example scenarios

Here are some example scenarios to help illustrate when a business is considered an AI Provider for the purposes of determining whether they qualify for this HITRUST certification:

No. Scenario Are they an AI provider?
1 Customer Satisfaction Analysis Platform: A software vendor provides a customer satisfaction aggregation tool that consumes online reviews and creates a multi-page document in a structured format using OpenAI’s pre-trained GPT4o model. The company performs no fine-tuning and does not reference OpenAI in their marketing content and holds themselves out as an AI-driven company selling AI-powered products. Yes. Even though they did not train the model or even fine-tune it, they have integrated an AI model into their technology platform. In this scenario, the organization would likely want to inherit several controls (e.g., security of the model’s training data) from OpenAI.
2 Online Job Posting Board: A provider of an online job posting board has a policy that all resumes are to be sent to an LLM for first level consideration before they are sent to hiring companies. The provider maintains processes and procedures using a set of standard prompts that instruct the LLM how to determine if the candidate should proceed. This work is performed manually by a team of level 1 technicians who are following these processes and procedures and using the web-based UI provided by their LLM provider. There is no source code for the job posting board related to AI in any way, and the online job posting board does not make any API calls to the LLM used to screen reumes. Instead, a fully manual process exists whereby a team of people manually uploads resumes to the LLM along with the standard prompts instructing the LLM on what to screen the resumes for. No. The provider of the online job posting board has an AI-enabled process but not an AI-enabled IT system. Instead, they use an AI system another organization deployed and administers (likely a genAI chatbot). As such, they could not seek to obtain an AI security certification from HITRUST for their provider’s system. The key differentiator between this example and the prior one is the absence of system integration with the LLM. In this example, the organization’s end-users directly interact with an LLM while in the previous example the organization’s system interfaces with an LLM.
3 Social Media Platform: A social media company uses an ML model to determine if users’ posts violate their terms of service and content guidelines. The users’ posts are routed in near-real-time to a self-trained ML model that exists within the social media platform’s technology stack. Users of the social media platform are not aware of this system. Yes. In this case, the social media company is both a model creator and the deployer of the model.
4 Loan Origination Company: A custom loan origination company uses an expert system to provide recommendations on loan application approval to a mortgage loan officer. Ultimately, a human makes the final decision. Yes. The expert system uses a heuristic (i.e., knowledge-driven instead of data-driven) model, and this model is part of the loan origination system deployed by the loan origination company.
What types of AI can certify?

Deployed applications leveraging any or all of the following (very) broad types of AI can be included in the scope of the HITRUST AI Security Certification:

AI type Also known as Description
Rule-based AI heuristic models, traditional AI, expert systems, symbolic AI, classical AI Rule-based AI systems rely on expert software written using rules. These systems employ human expertise in solving complex problems by reasoning with knowledge. Instead of procedural code, knowledge is expressed using If-Then/Else rules.
Predictive AI PredAI, non-generative machine learning models These are traditional, structured data machine learning models used to make inferences such as predictions or classifications, typically trained on an organization’s enterprise tabular data. These models extract insights from historical data to make accurate predictions about the most likely upcoming event, result or trend. In this context, a prediction does not necessarily refer to predicting something in the future. Predictions can refer to various kinds of data analysis applied to new data or historical data.
Generative AI GenAI, GAI Generative AI (gen AI) is artificial intelligence that responds to a user’s prompt or request with generated original content, such as audio, images, software code, text or video. Most generative AI models start with a foundation model, a type of deep learning model that “learns” to generate statistically probable outputs when prompted. Large language models (LLMs) and small language models (SLMs) are common foundation models for text generation, but other foundation models exist for different types of content generation.

 

The table above is intentionally broad so as to encompass a wide variety of AI solutions. For the purposes of this certification, examples of AI systems include anything from an LLM, to a linear regression function, to a carefully curated rule-based inference engine.

Further, the assessment considers security issues brought about by implementing popular generative AI development patterns, including the use of:

  • embeddings
  • language model tools such as agents and plugins
  • retrieval augmented generation (RAG)

NOTE: HITRUST will not award the HITRUST AI Security Certification to any AI deployments categorized as unacceptable or are otherwise banned by applicable AI regulation in the jurisdiction of the assessed entity.

What is this assessment and certification… not?

Not inclusive of risks introduced by simply using AI

Behaviors of end-users of AI technologies can intentionally or unintentionally lead to security incidents, just like in traditional IT. When members of the organization’s workforce leverage AI in the execution of their duties, controls in the AI usage layer (e.g., training, acceptable use policies) should be implemented to help ensure that AI is being used appropriately. HITRUST CSF requirements to address AI usage risks are being added to v12 of the HITRUST CSF (ETA H2 2025) so that they can be potentially included in all HITRUST assessments (not just assessments performed by AI system deployers).

Focused on a key AI risk (security), but not every AI risk

This certification focuses on mitigating the AI security threats that make up the cybersecurity risk that accompanies the deployment of AI within an organization. Cybersecurity risk is one of many risks discussed in AI risk management frameworks like the NIST AI RMF and ISO/IEC 23894:2023. AI risks that are peers to cybersecurity include those dealing with AI ethics (such as fairness and avoidance of detrimental bias), AI privacy (such as consent for using data to train AI models), and AI safety (i.e., ensuring the AI system does not harm individuals). HITRUST’s AI Risk Management Assessment and Insights Report is designed to help organizations assess and report on the larger AI risk management problem.

Focused on a key part of Trustworthy and Responsible AI, but not all of it

Trustworthy and Responsible AI is a collection of principles that help guide the creation, deployment and use of AI, considering the broader societal impact of AI systems. Pillars of Trustworthy and Responsible AI include explainability, predictability, bias and fairness, safety, transparency, privacy, inclusiveness, accountability… and security. This assessment and certification aim to help organizations deploying AI nail the security pillar, and prove that they’ve done so in a reliable and consistent way.

A complement to, not a complete compliance assessment for, the EU AI Act

The following is not intended to provide legal advice.

The aim of the EU AI Act is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.

The majority of the EU AI Act’s obligations fall on high risk AI systems, which are subject to strict obligations before they can be put on the market, including:

  • adequate risk assessment and mitigation systems
  • high quality of the datasets feeding the system to minimize risks and discriminatory outcomes
  • logging of activity to ensure traceability of results
  • detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance
  • clear and adequate information to the deployer
  • appropriate human oversight measures to minimize risk
  • a high level of robustness, security and accuracy

Some, but not all of these obligations are security related. For AI security risks specifically, most of these obligations are touched upon to some degree within the HITRUST AI Security Certification assessment. However, this isn’t the case for the other focus areas of the EU AI Act (including respect fundamental rights, safety, and ethics).

It is also important to note that HITRUST AI Security Certification is being released before the EU AI Act’s granular security guidance (ETA H2 2025). HITRUST has taken care to ensure close alignment with the EU AI Act’s security-specific expectations as is possible in their absence. HITRUST will continue to monitor the development of additional EU AI Act security requirements and adjust this assessment and certification accordingly if needed.

*Check out this episode of HITRUST’s Trust Vs. podcast for more dialog on this point.

A complement to, not a replacement for, ISO/IEC 42001:2023

See this page for discussion on how this assessment and certification pairs well with ISO/IEC 42001:2023.

Not exhaustive

Although this assessment and certification intends to support organizations in demonstrating the strength of cybersecurity protections for deployed AI systems, it is not exhaustive and does not cover every use case or obligation given the rapidly changing AI technical, legal, and regulatory environment. While using this assessment and certification, organizations should extend cybersecurity, governance and risk management practices beyond the scope of these requirements as needed for their use case or jurisdiction.

Which AI system layers are considered?

See this document’s glossary for definitions of AI specific terms used in this page

AI systems are made up of the application leveraging an AI model and very often an AI platform-as-a-service provider who delivers the AI model itself. Additional service providers, such as data brokers, data scientists, and architects support the model and data pipelines. The context of the overall system through which AI capabilities are delivered and consumed is critical to understand.

Consistent with the approach taken in the Microsoft Shared AI Responsibility Model as well as in the book Guardians of AI (R. Diver, 2024), we found it helpful to think of an AI system in terms of the following three layers. This approach can generally be applied to any AI scenario regardless of model type (generative, predictive, rules-based).

AI System Layer Description Security Considerations Addressed in this certification?
AI Usage Layer This is where the end-user interacts with the AI application. This is where the AI capabilities are consumed. Behaviors of end-users of AI technologies can intentionally or unintentionally lead to security incidents, just like in traditional IT. When members of the organization’s workforce leverage AI in the execution of their duties, controls in the AI usage layer (e.g., training, acceptable use policies) should be implemented to help ensure that AI is being used appropriately. No. HITRUST CSF requirements to address AI usage risks are being added to v12 of the HITRUST CSF (ETA H2 2025) for consideration in all HITRUST assessments.
AI Application Layer The AI application provides the AI service or interface that is consumed by the end-user. This layer could be as simple as a command line interface interacting with an AI service provider’s API or as complex as a full-featured web application. This is where the end-user’s inputs that will be passed to the AI model are captured, and this is also where the AI model’s response is displayed to the user. Techniques used to ground AI outputs (e.g., RAG) or extend AI capabilities (e.g., using language model tools such as agents and plugins) occur at this layer. This layer includes several key security controls that the AI application provider is either fully or partially responsible for (e.g., AI application safety and security system such as input filters). Yes
AI Platform Layer This layer provides AI capabilities to AI applications. In this layer the AI model is served to the AI application, typically through APIs. In addition to the trained model and model-serving infrastructure, this layer also includes the AI engineering tools used to create and deploy the model, any model tuning performed by the AI platform provider, and any specialized AI compute infrastructure leveraged. Depending on the AI system architecture and AI model used, the AI platform provider may be responsible for several key AI cybersecurity controls residing in this layer (e.g., model safety systems such as output filters implemented by the AI platform provider). Also residing in this layer are the controls performed during model creation and tuning (e.g., dataset sanitization). Yes

 

NOTE: The underlying IT platform and infrastructure that comprises the IT system using the AI model must be scoped into the underlying HITRUST e1, i1, or r2 assessment that the HITRUST AI Security Certification assessment is attached to. In other words, these layers are additive to the IT technology layers typically included in the scope of a HITRUST assessment.

Shared AI Responsibilities and Inheritance

See section 12.2 of the HITRUST assessment handbook to learn more about inheritance

Shared responsibility for… AI?

Risk management, security and assurance for AI systems is only possible if the multiple organizations contributing to the system share responsibility for identifying the risks to the system, managing those risks, and measuring the maturity of controls and safeguards.

AI systems are made up of the application leveraging an AI model and very often an AI platform-as-a-service provider who delivers the AI model itself. Additional service providers, such as data brokers, data scientists, and architects support the model and data pipelines. The context of the overall system through which AI capabilities are delivered and consumed is critical to understand. Also critical is the benefit of partnering with high-quality AI service providers that provide clear, objective, and understandable documentation of their AI risks and how those risks, including security, are managed in their platforms.

The rapid pace of AI adoption requires industry leadership to deliver security assurances that scale and bring stakeholders together to demonstrate that the overall, combined AI system can be trusted. HITRUST has years of experience bringing leaders across the private sector together to focus on practical shared responsibility based upon an inheritable control framework proven daily in security compliance and cloud computing. Shared AI assurances between stakeholders are essential to maintaining trust in AI systems based on proven, practical, and achievable approaches. AI systems must be designed, implemented and managed in a secure, trustworthy manner.

How HITRUST helps

Through the HITRUST AI Security Certification, HITRUST is extending our proven Shared Responsibility and Inheritance Program to support the needs of organizations adopting and deploying AI technologies. We’re helping simplify the challenge of shared AI responsibilities by bringing together the following:

Inheritability across HITRUST’s AI security requirements

The ability to inherit validation results of AI security requirements is critical to enabling meaningful AI cybersecurity assurances. Several key AI cybersecurity must be performed prior to the actual deployment of the AI application (such during “training time”—when the AI model is being created), and several others are enforced at the AI platform player and therefore are the responsibility of AI platform providers.

Each AI security requirement in this document has been assigned an “inheritability” value. Inheritability and accompanying rationale are shown in the Additional information area of each requirement’s page. Consistent with the approach used for inheriting relevant HITRUST assessment results from cloud service providers, these AI security requirements are either not inheritable, partially inheritable, or fully inheritable from the organization’s AI service provider (e.g., an AI platform-as-a-service provider). Requisite: The organization’s AI service provider must participate in the HITRUST external inheritance program and must have the AI security requirements in an externally inheritable assessment. These inheritability values will be reflected in the HITRUST Shared Responsibility Matrix Baseline for CSF v11.4.0 and later.

Inheritability across these 44 AI security requirements is as follows. The take-away: As the HITRUST AI Security Certification is added into the HITRUST assessments performed by CSPs and AI service providers, over half (57%) will be at least partially inheritable.

  • Partially inheritable: 21 / 44
  • Fully inheritable: 4 / 44

Approach to assigning inheritability of AI security requirements

To assist in assigning inheritability values to AI security requirements, each AI security requirement was categorized into one of the following “AI SRM Types”:

AI SRM Type Rationale Example(s)
Not inheritable
AI.NI.a Implementing and/or configuring the requirement is the AI application provider’s sole responsibility. Designing the AI application such that humans can intervene if needed
AI.NI.b The AI application provider and its AI service providers are responsible for independently performing the requirement outside of the AI system’s technology stack. In other words, it is a dual responsibility. Assigning roles and responsibilities for the organization’s deployed AI systems
AI.NI.c The AI application provider and its AI service providers are responsible for jointly performing the requirement outside of the AI system’s technology stack (e.g., through a jointly executed agreement / contract). In other words, it is a joint responsibility. Contractually agreeing on AI security requirements
Partially inheritable
AI.PI.a Performing the requirement may be a responsibility shared between an AI application provider and their AI platform provider, performed independently on separate layers/components of the overall AI system. Logging AI system inputs and outputs
Fully inheritable
AI.FI.a The requirement may be the sole responsibility of the AI model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI model creator apply. Increasing model robustness by taking additional measures against the training data
AI.FI.b The requirement may be the sole responsibility of the AI platform provider. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI platform provider apply. Restricting access to AI models
AI.FI.c The requirement may be the sole responsibility of the AI platform provider and/or AI model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI platform provider and/or AI model creator apply. Training, tuning, and RAG data minimization
Is this a stand-alone assessment?

No, the assessment leading to the HITRUST AI Security Certification will not be performed as a stand-alone assessment. Instead, it is combined with a HITRUST CSF e1, i1, or r2 certification.

Why?

  • Meaningful assurances over AI security cannot be reached without also considering the cybersecurity of the supporting technology layers used to deliver AI capabilities (e.g., the application leveraging the AI model, the cloud services used to deliver that application, the data center that those cloud services reside in).

  • Because AI-specific cybersecurity threats are additive to the traditional cybersecurity threats faced by the overall IT system, the assessment leading to AI security certification should also be additive to the cybersecurity assessment of the overall IT system.
How to tailor the HITRUST assessment?

If you are unfamiliar with the concept of tailoring HITRUST CSF assessments, please read this page of the HITRUST assessment handbook.

In v11.4.0 of the HITRUST CSF and later, a new HITRUST Security for AI Systems compliance factor will be made available. This factor can be optionally added to HITRUST e1, i1, and r2 readiness and validated assessments which include within the scope of the assessment an IT platform that leverages an AI model.

When the HITRUST Security for AI Systems compliance factor is added to a HITRUST assessment, three additional tailoring questions will be asked:

The following table includes information about each potential response to question 1. These will be presented as checkboxes in MyCSF (not as radio buttons), allowing many to be selected for a single assessment (as is needed if the assessment’s in-scope IT platforms leverage more than one type of AI model).

Factor / response Description Examples and behavior considerations Impact on the assessment
Rule-based AI model (aka “heuristic models”, “traditional AI, “expert systems”, “symbolic AI” or “classical AI”) Rule-based systems rely on expert software written using rules. These systems employ human expertise in solving complex problems by reasoning with knowledge. Instead of procedural code, knowledge is expressed using If-Then/Else rules. HITRUST AIE, AML systems for financial institutions, prescription dosing calculators Adds 27 “base AI security” HITRUST CSF requirement statements
Predictive AI model (i.e., a non-generative machine learning model) These are traditional, structured data machine learning models used to make inferences such as predictions or classifications, typically trained on an organization’s enterprise tabular data. These models extract insights from historical data to make accurate predictions about the most likely upcoming event, result or trend. In this context, a prediction is does not necessarily refer to predicting something in the future. Predictions can refer to various kinds of data analysis or production applied to new data or historical data (including translating text, creating synthetic images or diagnosing a previous power failure). scikit-learn, XGBoost, PyTorch and Hugging Face transformer models Adds 27 “base AI security” requirements
+ 9 additional requirements
= 36 added requirements
Generative AI model (through a foundation model) Generative AI (gen AI) is artificial intelligence that responds to a user’s prompt or request with generated original content, such as audio, images, software code, text or video. Most generative AI models start with a foundation model, a type of deep learning model that “learns” to generate statistically probable outputs when prompted. Large language models (LLMs) and small language models (SLMs) are common foundation models for text generation, but other foundation models exist for different types of content generation. OpenAI ChatGPT, Anthropic Claude, Meta LLAMA, Google Gemma, Amazon Titan, Microsoft Phi Adds 27 “base AI security” requirements
+ 9 additional requirements associated with predAI
+ 5 additional Gen-AI only requirements
= 41 added requirements

The following table describes question 2 and 3. These will be presented as radio buttons in MyCSF, allowing only one answer. If the assessment’s in-scope IT platforms leverage many models, assessed entities should use a high-water mark approach to answering these two questions. For example, if one in-scope model is open source and the other model is closed source and confidential to the organization, question 3 should be answered affirmatively.

Question no. Question Description Impact on the assessment
2 Was confidential and/or covered data used to train the model, tune the model, or enhance the model’s prompts via RAG? AI models often require large volumes of data to train and tune. This data, as well as the sources of organizational data used for prompt enhancement through retrieval augmented generation (RAG), is very often confidential and/or covered information. When this is true, additional protections must be in place to prevent the theft and leakage of this data through the AI system.

Per the per the HITRUST Glossary:
  • Covered information is “any type of information (including data) subject to security, privacy, and/or risk regulations that is to be secured from unauthorized access, use, disclosure, disruption, modification, or destruction to maintain confidentiality, integrity, and/or availability.
  • Confidential information is as any type of information (including data) that is not to be disclosed to unauthorized persons, processes, or devices.
Adds 3 requirements if true, which deal with protections (encryption, data minimization, added model robustness)
3 Are the model’s architecture and parameters confidential to the organization? The training of an AI model can be a significant undertaking. While some AI models are open source, the inner workings of many AI models and the models themselves represent valuable intellectual property for the organization that created them. When this is true, additional protections must be in place to prevent the theft of the model and leakage of model parameters. Adds 2 requirements if true (encryption, data minimization)

MyCSF uses the collective responses to these questions to appropriately tailor the HITRUST CSF assessment to the specifics of the organization’s AI deployment context. The following table shows the different requirement statement counts possible based on different combinations of responses to these tailoring questions. Note that this table contemplates only a single model type included in the scope of the assessment. As stated above, more than one IT platform can be included in the scope of a HITRUST CSF assessment which leverages an AI model. When this is true, the assessed entity should select all model types that apply for question 1 and should follow a high-water mark approach to answering questions 2 and 3.

Q2: Sensitive data used Q3: Confidential model used? Requirement count
Rule-based AI model
No No 27
Yes No 27
No Yes 27
Yes Yes 27
Predictive AI model
No No 36
Yes No 38
No Yes 38
Yes Yes 40
Generative AI model
No No 41
Yes No 44
No Yes 43
Yes Yes 44
Guidance for External Assessors

These guidelines are a deliverable from the 2024 HITRUST External Assessor AI Working Group, described at the bottom of this page.

This page contains guidelines for the composition External Assessor teams engaged to perform cybersecurity assessments of AI systems.

Recommended expertise, knowledge, and experience

Given the rapid emergence of AI technologies and the associated data privacy and security risks, independent assessments by qualified external assessors are crucial. The assessment team should have diverse skills, including knowledge of AI systems and industry-specific security requirements. These guidelines outline the essential expertise required to perform such assessments, including a deep understanding of AI technology and cybersecurity principles, risk management skills, and regulatory compliance knowledge.

While no single external assessor is expected to meet all of the following attributes, these attributes should be met by the combined expertise, knowledge, and experience of the collective external assessor team. These attributes are additive to the requisite expertise, knowledge, and experience necessary to competently perform cybersecurity consulting and/or attestations outside of the AI context.

It is the recommendation of this HITRUST External Assessor AI Working Group that these guidelines be annually reviewed and updated by future subcommittees of the HITRUST External Assessor Council, and based on feedback from HITRUST’s quality review of completed AI security assessments to incorporate lessons learned.

Essential expertise

  • Understanding of AI technologies and their business context
    • Knowledge of AI model types (e.g., open source vs. not, predictive AI vs. generative AI), platforms, patterns (e.g., RAG), and technical architectures
      • Knowledge and expertise of the team performing the assessment should align with the complexity of the environment being assessed, and ongoing education should be implemented to continuously understand the latest developments
    • Familiarity with AI development frameworks and tools, as well as with the AI software development lifecycle
    • Understanding the business drivers for the rapid emergence of AI in the marketplace
    • Understanding of the risks associated with the adoption of AI without proper risk mitigation techniques

  • Cybersecurity expertise
    • Proficiency in securing AI systems, models and data
    • Demonstrated knowledge/familiarity/awareness with the AI security standards, guidelines, and publications listed here and here.

  • Risk Management Skills
    • Ability to identify and assess risks associated with AI implementations
    • Experience in developing risk mitigation strategies for AI initiatives

  • Professional certifications relating to AI
    • (Not specifically recommended by this working group due to the novelty of the subject matter in the governance, risk, and compliance domain; this will be revisited in future iterations of this document)

Specific knowledge

  • AI security threats
    • Awareness of common security threats to AI systems
    • Understanding of potential vulnerabilities in AI models and datasets
    • Understanding of expanded attack surface for AI enabled systems
    • Understanding of impacts associated with compromised vulnerabilities

  • Regulatory compliance
    • Knowledge of relevant AI-specific regulations (e.g., EU AI Act) and standards (e.g., ISO 42001)
    • Knowledge/familiarity in ensuring AI systems comply with regulatory requirements

Experience

  • Prior assessments
    • Experience conducting engagements focusing on security and/or risk assessment of AI systems.
      • If unable to meet this attribute due to the novelty of the subject matter in the governance, risk, and compliance domain, consider a letter/attestation to file describing actions taken to overcome this experience shortcoming (e.g., through required pre-engagement training)
    • Track record of evaluating risk management controls in AI projects

  • Industry experience
    • Familiarity with various industries implementing AI technologies
    • Understanding of sector-specific security and compliance requirements for AI

Working group membership

These guidelines are a deliverable from the 2024 HITRUST External Assessor AI Working Group, a subcommittee of the 2024 HITRUST External Assessor Council. HITRUST is deeply appreciative of the contributions to from each member of the 2024 HITRUST External Assessor AI Working Group:

How big / how many requirements?

Up to 44 added requirements

As of version 11.4.0 of the HITRUST CSF, the assessment needed to achieve the HITRUST AI Security Certification consists of up to 44 HITRUST CSF requirement statements. See this page for a breakdown of these 44 requirement statements by AI security topic.

However, please consider the following:

  • These 44 requirement statements cannot be assessed in isolation. Instead, they must be added into a HITRUST e1, i1, or r2 assessment. As a result, the total number of requirement statements in the overall assessment will be more than 44 requirements.
    For example:
    • A combined assessment featuring a HITRUST e1 assessment with the cybersecurity for AI deployers factor will include the 44 requirement statements that comprise the HITRUST e1 and up to 44 additional requirement statements needed for the HITRUST cybersecurity assessment and certification.
    • A combined assessment featuring a HITRUST i1 assessment with the cybersecurity for AI deployers factor will include the 182 requirement statements that comprise the HITRUST i1 and up to 44 additional requirement statements needed for the HITRUST cybersecurity assessment and certification.
  • Affected by tailoring: Not all 44 requirement statements will be included in each assessment in MyCSF. Instead, the assessment is tailored to include only a subset of these 44 based on the organization’s responses to tailoring questions. This is true regardless of the type of HITRUST assessment (e1, i1, r2) the AI security assessment is being appended to.
  • Allowance for control deficiencies: All 44 will not need to be fully implemented to achieve the HITRUST AI security certification. Just like HITRUST’s other certifications, there is an allowance for control deficiencies.
  • Will change over time: The HITRUST CSF is constantly updated in light of changes to the cybersecurity threat landscape and in response to changes in the underlying authoritative sources we harmonize. As a result, future versions of the HITRUST CSF may include a different number of requirement statements in the HITRUST cybersecurity assessment and certification.
AI security requirements included

The draft HITRUST CSF requirement statements presented in this section are organized by “topic”. This grouping will not be ported into the HITRUST CSF; instead, the CSF’s existing hierarchy (e.g., categories, domains) will be used. The suggested placement in the existing HITRUST CSF hierarchy for each requirement statement is shown.

The AI security topics used to organize these draft HITRUST CSF requirement statements are as follows:

Topic Requirement statements
AI security threat management 3
AI security governance and oversight 3
Development of AI software 6
AI legal and compliance 3
AI supply chain 4
Model robustness 5
Access to the AI system 7
Encryption of AI assets 2
AI system logging and monitoring 3
Documenting and inventorying AI systems 3
Filtering and sanitizing AI data, inputs, and outputs 3
Resilience of the AI system 2
Total requirement statements 44

 

To understand the mitigations commonly used to help mitigate AI security risks and threats to deployed AI systems, HITRUST analyzed the AI security-specific mitigations discussed in the following authoritative and commercial sources. In the HITRUST lexicon, an “authoritative source” is an externally developed, information-protection-focused framework, standard, guideline, publication, regulation or law. The sources listed below that have been harmonized into the HITRUST CSF as of v11.4.0 are indicated. HITRUST may harmonize more of these sources in future versions of the HITRUST CSF at our discretion and based on your feedback.

No. Source and Link Published by Date or Version Harmonized into the HITRUST CSF as of v11.4.0?
From the European Union Agency for Cybersecurity (ENISA)
1 Securing Machine Learning Algorithms European Union Agency for Cybersecurity (ENISA) 2021 No
From the International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
2 ISO/IEC TR 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC) 2022 No
3 ISO/IEC TR 24028:2020: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence International Standards Organization (ISO)/International Electrotechnical Commission (IEC) 2020 No
4 ISO/IEC TR 38507:2022: Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations International Standards Organization (ISO)/International Electrotechnical Commission (IEC) 2022 No
5 ISO/IEC 42001:2023 Information Technology — Artificial Intelligence — Management system International Standards Organization (ISO)/International Electrotechnical Commission (IEC) 2023 No (being considered for v12.0)
From the National Institute of Standards and Technology (NIST)
6 NIST AI 100-2:E2023: Adversarial Machine Learning: Taxonomy of Attacks and Mitigations National Institute of Standards and Technology (NIST) Jan. 2023 No
From the Open Worldwide Application Security Project (OWASP)
7 OWASP AI Exchange Open Worldwide Application Security Project (OWASP) As of Q3 2024 (living document) Yes
8 OWASP Machine Learning Top 10 Open Worldwide Application Security Project (OWASP) v0.3 Yes
9 OWASP Top 10 for LLM Applications Open Worldwide Application Security Project (OWASP) v1.1.0 Yes
10 LLM AI Cybersecurity & Governance Checklist Open Worldwide Application Security Project (OWASP) Feb. 2024, v1.0 No
From commercial entities
11 The anecdotes AI GRC Toolkit Anecdotes A.I Ltd. 2024 No
12 Databricks AI Security Framework Databricks Version 1.1, Sept. 2024 No
13 Google Secure AI Framework Google June 2023 No
14 HiddenLayer’s 2024 AI Threat Landscape Report HiddenLayer 2024 No
15 Snowflake AI Security Framework Snowflake Inc. 2024 No
From others
16 Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems National Security Agency (NSA) April 2024 No
17 Generative AI Framework for HM Government Central Digital and Data Office, UK Government Jan. 2024 No
18 Guidelines for Secure AI System Development Cybersecurity & Infrastructure Security Agency (CISA) Nov. 2023 No
19 Managing artificial intelligence-specific cybersecurity risks in the financial services sector U.S. Department of the Treasury March 2024 No
20 Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators US Department of Homeland Security April 2024 No
21 MITRE ATLAS (mitigations) The MITRE Corporation As of Q3 2024 (living document) No

 

Relevant to this analysis: Because these documents were created to satisfy different needs and audiences, they contain recommendations that did not apply to the scope of the HITRUST AI Cybersecurity Certification effort. Namely, we removed from consideration recommendations that:

  1. did not relate to AI security for deployed systems,

  2. applied to users of AI systems generally (and not to the deployers of AI systems), as these will be included in version 12 of the HITRUST CSF slated for release in H2 2025, or

  3. did not mitigate security threats specific to or exacerbated by AI but instead mitigated general cybersecurity security threats to traditional IT systems (such as source code leaks via misconfigured repositories), as these are addressed in the underlying HITRUST CSF e1, i1, or r2 assessment that the HITRUST Cybersecurity Certification for Deployed AI Systems is combined with.

The goal of analyzing these sources was not to ensure 100% coverage the AI mitigations discussed. Instead, comparing these sources against one another helped us:

  • Understand the AI security control environment, as well as the applicability of various AI security mitigations to different AI deployment scenarios and model types.
  • Minimize any subjectivity or personal bias we brought with us into the effort regarding these topics.
  • Identify (by omission, consensus, and direct discussion) the AI security mitigations which are generally are and are not employed by organizations who deploy AI systems.
  • Identify the mitigations commonly recommended for identified AI security threats.

Other key inputs into our understanding of the AI security threat landscape included:

  • Interviews with the authors of several of the documents listed above, as well as other cybersecurity leaders, on season 2 of HITRUST’s “Trust Vs.” podcast. These recordings are available here as well as podcast directories such as Apple Podcasts and YouTube Music.
TOPIC: AI security threat management
Identifying security threats to the AI system (What can go wrong and where?)

HITRUST CSF requirement statement [?] (17.03bAISecOrganizational.4)

The organization identifies the relevant AI-specific security threats (e.g., evasion, 
poisoning, prompt injection) to the deployed AI system
(1) prior to deployment of new models,
(2) regularly (at least semiannually) thereafter, and
(3) when security incidents related to the AI system occur.
The organization documents identified AI threat scenarios in a threat register which
minimally contains
(4) a description of the identified AI security threat and
(5) the associated component(s) of the AI system (e.g., training data, models, APIs).

 

Evaluative elements in this requirement statement [?]
1. The organization identifies the relevant AI-specific security threats (e.g., evasion, 
poisoning, prompt injection) to the deployed AI system prior to deployment of new models.
2. The organization identifies the relevant AI-specific security threats (e.g., evasion, 
poisoning, prompt injection) to the deployed AI system regularly (at least semiannually).
3. The organization identifies the relevant AI-specific security threats (e.g., evasion, 
poisoning, prompt injection) to the deployed AI system when security incidents related
to the AI system occur.
4. The organization documents identified AI threat scenarios in a threat register which 
minimally contains a description of the identified AI security threat.
5. The organization documents identified AI threat scenarios in a threat register which 
minimally contains the associated component(s) of the AI system (e.g., training data,
models, APIs).
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample-based test where possible for each evaluative element.
    Example test(s):
    • For example, select a sample from the threat register to confirm all AI-specific security threats are identified and documented. Further, confirm that the threat register contains a detailed description of the identified AI security threat and the associated component(s) of the AI system (e.g., training data, models, APIs). Further, confirm that the threat register was reviewed and updated if needed at least at the frequency mandated in the requirement statement.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the percentage of the organization’s AI-specific security threats that are correctly documented in the threat register. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that relevant AI-specific security threats (e.g., evasion, poisoning, prompt injection) to the deployed AI system are identified prior to deployment of new models, regularly (at least at the frequency mandated in the requirement statement) thereafter, and when security incidents related to the AI system occur.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]
  • Assessment domain: 17 Risk Management
  • Control category: 03.0 – Risk Management
  • Control reference: 03.b – Performing Risk Assessments

Specific to which parts of the overall AI system? [?]
  • N/A, not AI component-specific

Discussed in which authoritative AI security sources? [?]
  • LLM AI Cybersecurity & Governance Checklist
    Feb. 2024, © The OWASP Foundation
    • Where:
      • 3. Checklist > 3.9. Using or implementing large language models > Bullet #9
      • 3. Checklist > 3.9. Using or implementing large language models > Bullet #10

  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where:
      • Using generative AI safely and responsibly > Security > Security Risks > Practical security recommendations > Bullet 1

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Specific ML > Implement processes to maintain security levels of ML components over time

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 38: Platform security — vulnerability management (Operations and Platform)

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Extend detection and response to bring AI into an organization’s threat universe > Develop understanding of threats that matter for AI usage scenarios, the types of AI used, etc.
      • Step 4. Apply the six core elements of the SAIF > Adapt controls to adjust mitigations and create faster feedback loops for AI deployment > Stay on top of novel attacks including prompt injection, data poisoning and evasion attacks
      • Step 4. Apply the six core elements of the SAIF > Contextualize AI system risks in surrounding business processes > Establish a model risk management framework and build a team that understands AI-related risks

  • HiddenLayer’s 2024 AI Threat Landscape Report
    2024, © HiddenLayer
    • Where:
      • Part 4: Predictions and recommendations > 2. Risk assessment and threat modeling

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No (dual responsibility). The AI application provider and its AI service providers (if used) are responsible for independently performing this requirement outside of the AI system’s technology stack.
Threat modeling (What are we doing about it?)

HITRUST CSF requirement statement [?] (17.03bAISecOrganizational.5)

The organization performs threat modeling for the AI system to 
(1) evaluate its exposure to identified AI security threats,
(2) identify countermeasures currently in place to mitigate those threats, and
(3) identify any additional countermeasures deemed necessary considering the
organization’s overall risk tolerance and the risk categorization of the AI system.
This activity is performed
(4) upon identification of new AI security threats,
(5) prior to deployment of new models,
(6) regularly (at least semiannually) thereafter, and
(7) when security incidents related to the AI system occur.

 

Evaluative elements in this requirement statement [?]
1. The organization performs threat modeling for the AI system 
to evaluate its exposure to identified AI security threats.
2. The organization performs threat modeling for the AI system 
to identify countermeasures currently in place to mitigate AI security threats.
3. The organization performs threat modeling for the AI system 
to identify any additional countermeasures deemed necessary considering the
organization’s overall risk tolerance and the risk categorization of the AI system.
4. Threat modeling for the AI system is performed upon 
identification of new AI security threats.
5. Threat modeling for the AI system is performed prior to 
deployment of new models.
6. Threat modeling for the AI system is performed regularly 
(at least semiannually).
7. Threat modeling for the AI system is performed when security 
incidents related to the AI system occur.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, select a sample of newly identified AI security threats and security incidents related to the AI system and examine evidence to confirm threat modeling was performed for the AI system. Additionally, examine evidence to confirm threat modeling was performed for the AI system prior to deployment of new models and at least at the frequency mandated in the requirement statement. Further, confirm that the modeling evaluated the exposure to identified AI security threats, identified countermeasures currently in place to mitigate those threats, and identified any additional countermeasures deemed necessary considering the organization’s overall risk tolerance and the risk categorization of the AI system.
    • For example, select a sample of the documented AI system threat modeling to confirm AI security threats are identified. Further, confirm that the modeling evaluates the exposure to identified AI security threats, identifies countermeasures currently in place to mitigate those threats, and identifies any additional countermeasures deemed necessary considering the organization’s overall risk tolerance and the risk categorization of the AI system.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the percentage of threats and security incidents for which threat modeling was not performed. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and confirm the organization performs threat modeling for the AI system.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 17 Risk Management
  • Control category: 03.0 – Risk Management
  • Control reference: 03.b – Performing Risk Assessments

Specific to which parts of the overall AI system? [?]
  • N/A, not AI component-specific

Discussed in which authoritative AI security sources? [?]
  • Guidelines for Secure AI System Development
    Nov. 2023, Cybersecurity & Infrastructure Security Agency (CISA)
    • Where:
      • 1. Secure design > Model the threats to your system
      • 2. Secure development > Identify, track, and protect your assets

  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where:
      • Using generative AI safely and responsibly > Security > Security Risks > Practical Security Recommendations > Bullet 2

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Technical > Ensure ML projects follow the global process for integrating security into projects
      • 4.1- Security Controls > Technical > Conduct a risk analysis of the ML application

Discussed in which commercial AI security sources? [?]
  • The anecdotes AI GRC Toolkit
    2024, © Anecdotes A.I Ltd.
    • Where: Control 6.2: Threat Modeling

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Expand strong security foundations to the AI ecosystem > Perform an analysis to determine what security controls needed to be added due to specific threats, regulations, etc.
      • Step 4. Apply the six core elements of the SAIF > Automate defenses to keep pace with existing and new threats > Identify the list of AI security capabilities focused on securing AI systems, training data pipelines, etc.
      • Step 4. Apply the six core elements of the SAIF > Adapt controls to adjust mitigations and create faster feedback loops for AI deployment > Create a feedback loop

  • HiddenLayer’s 2024 AI Threat Landscape Report
    2024, © HiddenLayer
    • Where:
      • Part 4: Predictions and recommendations > 2. Risk assessment and threat modeling

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No (dual responsibility). The AI application provider and its AI service providers (if used) are responsible for independently performing this requirement outside of the AI system’s technology stack.
Security evaluations such as AI red teaming (Are the countermeasures working?)

HITRUST CSF requirement statement [?] (07.06hAISecOrganizational.1)

The organization performs security assessments (e.g., AI red teaming, penetration 
testing) of the AI system which include consideration of AI-specific security threats (e.g.,
poisoning, model inversion)
(1) prior to deployment of new models,
(2) prior to deployment of new or significantly modified supporting infrastructure (e.g.,
migration to a new cloud-based AI platform),
(3) regularly (at least annually) thereafter.
The organization
(4) takes appropriate risk treatment measures (including implementing any additional
countermeasures) deemed necessary based on the results.

 

Evaluative elements in this requirement statement [?]
1. The organization performs security assessments (e.g., AI red teaming, penetration 
testing) of the AI system which include consideration of AI-specific security threats (e.g.,
poisoning, model inversion) prior to deployment of new models.
2. The organization performs security assessments (e.g., AI red teaming, penetration 
testing) of the AI system which include consideration of AI-specific security threats (e.g.,
poisoning, model inversion) prior to deployment of new or significantly modified supporting
infrastructure (e.g., migration to a new cloud-based AI platform),
3. The organization performs security assessments (e.g., AI red teaming, penetration 
testing) of the AI system which include consideration of AI-specific security threats (e.g.,
poisoning, model inversion) regularly (at least annually).
4. The organization takes appropriate risk treatment measures (including implementing 
any additional countermeasures) deemed necessary based on the results.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, select a sample of security reports to confirm that security assessments (e.g., AI red teaming, penetration testing) of the AI system which include consideration of AI-specific security threats (e.g., poisoning, model inversion), are conducted at least annually. Further, confirm that assessments are conducted prior to deployment of new models, and that appropriate risk treatment measures (including implementing any additional countermeasures) deemed necessary based on the results, are implemented.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the frequency and specifications of the security assessments performed on the AI system. Reviews, tests, or audits are completed by the organization to confirm that the requirements for AI system security testing are completed, and measure the effectiveness of the implemented counter measures.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 07 Vulnerability Management
  • Control category: 06 – Compliance
  • Control reference: 06.h – Technical Compliance Checking

Specific to which parts of the overall AI system? [?]
  • N/A, not AI component-specific

Discussed in which authoritative AI security sources? [?]
  • OWASP 2023 Top 10 for LLM Applications
    Oct. 2023, © The OWASP Foundation
    • Where:
      • LLM03: Training data poisoning > Prevention and mitigation strategies > Bullet #7
      • LLM05: Supply chain vulnerabilities > Prevention and mitigation strategies > Bullet #7
      • LLM05: Supply chain vulnerabilities > Prevention and mitigation strategies > Bullet #8

  • LLM AI Cybersecurity & Governance Checklist
    Feb. 2024, © The OWASP Foundation
    • Where:
      • 3. Checklist > 3.3. AI Asset Inventory > Bullet #4
      • 3. Checklist > 3.9. Using or implementing large language models > Bullet #7
      • 3. Checklist > 3.9. Using or implementing large language models > Bullet #8
      • 3. Checklist > 3.13. AI Red Teaming > Bullet #1

  • Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems
    Apr 2024, National Security Agency (NSA)
    • Where:
      • Continuously protect the AI system > Validate the AI system before and during use > Bullet 4
      • Continuously protect the AI system > Validate the AI system before and during use > Bullet 7
      • Continuously protect the AI system > Validate the AI system before and during use > Bullet 8
      • Secure AI operation and maintenance > Update and patch regularly > Bullet 1
      • Secure AI operation and maintenance > Conduct audits and penetration testing > Bullet 1

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Specific ML > Integrate poisoning control after the “model evaluation” phase
      • 4.1- Security Controls > Technical > Ensure ML projects follow the global process for integrating security into projects
      • 4.1- Security Controls > Technical > Assess the exposure level of the model used

Discussed in which commercial AI security sources? [?]

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Adapt controls to adjust mitigations and create faster feedback loops for AI deployment > Conduct Red Team exercises to improve safety and security for AI-powered products and capabilities
      • Step 4. Apply the six core elements of the SAIF > Adapt controls to adjust mitigations and create faster feedback loops for AI deployment > Create a feedback loop

  • HiddenLayer’s 2024 AI Threat Landscape Report
    2024, © HiddenLayer
    • Where:
      • Part 4: Predictions and recommendations > 4. Model robustness and validation

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Attacks on the infrastructure hosting AI services > Mitigations > Security testing and penetration testing

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No (dual responsibility). The AI application provider and its AI service providers (if used) are responsible for independently performing this requirement outside of the AI system’s technology stack.
TOPIC: AI security governance and oversight
Assign Roles and Resp. for AI

HITRUST CSF requirement statement [?] (01.02aAISecOrganizational.1)

The organization formally defines the roles and responsibilities for the
(1) governance,
(2) security, and
(3) risk management
of the organization's deployed AI systems within the organization (e.g., by extending
a pre-existing RACI chart or creating a new one specific to AI).
The organization formally
(4) assigns human accountability for the actions performed by, outputs produced by,
and decisions made by the organization’s deployed AI systems.

 

Evaluative elements in this requirement statement [?]
1. The organization formally defines the roles and responsibilities for AI governance 
of the organization's deployed AI systems within the organization (e.g., by extending a
pre-existing RACI chart or creating a new one specific to AI).
2. The organization formally defines the roles and responsibilities for AI security 
of the organization's deployed AI systems within the organization (e.g., by extending a
pre-existing RACI chart or creating a new one specific to AI).
3. The organization formally defines the roles and responsibilities for AI risk 
of the organization's deployed AI systems management within the organization (e.g.,
by extending a pre-existing RACI chart or creating a new one specific to AI).
4. The organization formally assigns human accountability for the actions performed 
by, outputs produced by, and decisions made by the organization’s deployed AI systems.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review AI systems policy and procedure documentation to determine if roles and responsibilities for the governance, security, and risk management of the organization’s deployed AI systems is in place. Further, confirm that the organization assigns human accountability for the actions performed by, outputs produced by, and decisions made by the organization’s deployed AI systems.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the completeness of the organization’s AI system documentation. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that requirements for AI system documentation is maintained.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 01 Information Protection Program
  • Control category: 02.0 – Human Resources Security
  • Control reference: 02.a – Roles and Responsibilities

Specific to which parts of the overall AI system? [?]
  • N/A, not AI component-specific

Discussed in which authoritative AI security sources? [?]
  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where:
      • Using generative AI safely and responsibly > Ethics > Accountability and responsibility
      • Using generative AI safely and responsibly > Ethics > Accountability and responsibility > Practical recommendations > Bullet 4
      • Using generative AI safely and responsibly > Ethics > Accountability and responsibility > Practical recommendations > Bullet 3
      • Using generative AI safely and responsibly > Data protection and privacy > Accountability > Practical recommendations > Bullet 1

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No (dual responsibility). The AI application provider and its AI service providers (if used) are responsible for independently performing this requirement outside of the AI system’s technology stack.
Augment written policies to address AI specificities

HITRUST CSF requirement statement [?] (01.00aAISecOrganizational.1)

As appropriate to the organization’s AI deployment context, the stated scope and 
contents of the organization’s written policies—in areas including but not limited to
(1) security administration,
(2) data governance,
(3) software development,
(4) risk management,
(5) incident management,
(6) business continuity, and
(7) disaster recovery
—explicitly includes the organization’s AI systems and their AI specificities.

 

Evaluative elements in this requirement statement [?]
1. As appropriate to the organization’s AI deployment context, the stated scope and 
contents of the organization’s written policies related to security administration to
explicitly include the organization’s AI systems and their AI specificities.
2. As appropriate to the organization’s AI deployment context, the stated scope and 
contents of the organization’s written policies related to data governance to explicitly
include the organization’s AI systems and their AI specificities.
3. As appropriate to the organization’s AI deployment context, the stated scope and 
contents of the organization’s written policies related to software development to
explicitly include the organization’s AI systems and their AI specificities.
4. As appropriate to the organization’s AI deployment context, the stated scope and 
contents of the organization’s written policies related to risk management to explicitly
include the organization’s AI systems and their AI specificities.
5.As appropriate to the organization’s AI deployment context, the stated scope and 
contents of the organization’s written policies related to incident management to
explicitly include the organization’s AI systems and their AI specificities.
6. As appropriate to the organization’s AI deployment context, the stated scope and 
contents of the organization’s written policies related to business continuity to explicitly
include the organization’s AI systems and their AI specificities.
7. As appropriate to the organization’s AI deployment context, the stated scope and 
contents of the organization’s written policies related to disaster recovery to explicitly
include the organization’s AI systems and their AI specificities.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the organizations written policies for AI systems to confirm their completeness. Further, confirm that the AI system policies contain security administration, data governance, software development, risk management, incident management, business continuity, and disaster recovery, explicitly to AI systems and their AI specificities.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the completeness of the organization’s AI system documentation. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that requirements for AI system documentation is maintained.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 01 Information Protection Program
  • Control category: 00.0 – Information Security Management Program
  • Control reference: 00.a – Information Security Management Program

Specific to which parts of the overall AI system? [?]
  • N/A, not AI component-specific

Discussed in which authoritative AI security sources? [?]
  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where:
      • Using generative AI safely and responsibly > Accountability > Practical recommendations > Bullet 1
      • Using generative AI safely and responsibly > Accountability > Practical recommendations > Bullet 2
      • Building generative AI solutions > Building the solution >Patterns > Practical recommendations > Bullet 2

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Organizational > Integrate ML specificities to existing security policies
      • 4.1- Security Controls > Organizational > Ensure ML applications comply with security policies
      • 4.1- Security Controls > Organizational > Ensure ML applications comply with protection policies and are integrated to security operations processes
      • 4.1- Security Controls > Organizational > Ensure ML applications comply with identity management, authentication, and access control policies
      • 4.1- Security Controls > Technical > Ensure ML projects follow the global process for integrating security into projects

Discussed in which commercial AI security sources? [?]
  • The anecdotes AI GRC Toolkit
    2024, © Anecdotes A.I Ltd.
    • Where:
      • Control 1.2: Policy Augmentation
      • Control 8.2: Recovery and Continuity

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Expand strong security foundations to the AI ecosystem > Review what existing security controls across the security domains apply to AI systems

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No (dual responsibility). The AI application provider and its AI service providers are responsible for independently performing this requirement outside of the AI system’s technology stack.
Humans can intervene if needed

HITRUST CSF requirement statement [?] (12.09abAISecSystem.3)

The design of the AI application allows its human operators the ability to 
(1) evaluate AI model outputs before relying on them and
(2) intervene in AI model-initiated actions (e.g., sending emails, modifying records) if
deemed necessary.

 

Evaluative elements in this requirement statement [?]
1. The design of the AI system allows its human operators the ability to evaluate AI 
model outputs before relying on.
2. The design of the AI system allows its human operators the ability to intervene in 
AI model-initiated actions (e.g., sending emails, modifying records) if deemed necessary.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI application to confirm it allows a human operator to evaluate AI model outputs. Further, confirm the ability for human operators to intervene in AI model-initiated actions when necessary.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the AI application allows human operators to evaluate AI model outputs and intervene in AI model-initiated actions when necessary. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 12 Audit Logging & Monitoring
  • Control category: 09.0 – Communications and Operations Management
  • Control reference: 09.ab – Monitoring System Use

Specific to which parts of the overall AI system? [?]
  • AI application layer:
    • Application AI safety and security systems

Discussed in which authoritative AI security sources? [?]
  • OWASP 2023 Top 10 for LLM Applications
    Oct. 2023, © The OWASP Foundation
    • Where:
      • LLM01: Prompt injection > Prevention and mitigation strategies > Bullet #2
      • LLM03: Training data poisoning > Prevention and mitigation strategies > Bullet #7
      • LLM07: Insecure plugin design > Prevention and mitigation strategies > Bullet #6
      • LLM08: Excessive agency > Prevention and mitigation strategies > Bullet #6
      • LLM09: Overreliance > Prevention and mitigation strategies > Bullet #1

  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where:
      • Building generative AI solutions > Building the solution > Getting reliable results > Bullet 9
      • Building generative AI solutions > Building the solution > Getting reliable results > Bullet 10
      • Using generative AI safely and responsibly > Ethics > Accountability and responsibility > Practical recommendations > Bullet 6
      • Using generative AI safely and responsibly > Ethics > Accountability and responsibility > Practical recommendations > Bullet 5
      • Using generative AI safely and responsibly > Ethics > Maintaining appropriate human involvement in automated processes > Bullet 3
      • Using generative AI safely and responsibly > Ethics > Maintaining appropriate human involvement in automated processes > Bullet 5
      • Building generative AI solutions > Building the solution > Data management > Practical recommendations > Bullet 2

Discussed in which commercial AI security sources? [?]

  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • DASF 29: Build MLOps workflows with human-in-the-loop (HILP) with permissions, versions and approvals to promote models to production

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Prompt injection > Mitigations > Human-in-the-loop systems
      • Indirect prompt injection > Mitigations > Human-in-the-loop systems

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No. Implementing and/or configuring this requirement is the AI application provider’s sole responsibility.
TOPIC: Development of AI software
Provide AI security training to AI builders and deployers

HITRUST CSF requirement statement [?] (13.02eAISecOrganizational.1)

The organization provides training no less than annually on AI security topics (e.g., 
vulnerabilities, threats, organizational policy requirements) for all teams involved in
AI software and model creation and deployment, including (as applicable)
(1) development,
(2) data science, and
(3) cybersecurity
personnel.

 

Evaluative elements in this requirement statement [?]
1. The organization provides training no less than annually on AI security topics 
(e.g., vulnerabilities, threats, organizational policy requirements) to development
personnel.
2. The organization provides training no less than annually on AI security topics 
(e.g., vulnerabilities, threats, organizational policy requirements) to data science
personnel.
3. The organization provides training no less than annually on AI security topics 
(e.g., vulnerabilities, threats, organizational policy requirements) to cybersecurity
personnel.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI security training documentation to confirm that training is conducted no less than annually on AI security topics (e.g., vulnerabilities, threats, organizational policy requirements) for all teams involved in AI software and model creation and deployment. Further, confirm the training includes all development, data science, and cybersecurity personnel.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the percentage of personnel in AI system roles who have received AI security training. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and confirm that all personnel in AI system roles receive AI security training no less than annually.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 13 Education, Training and Awareness
  • Control category: 02.0 – Human Resources Security
  • Control reference: 02.e – Information Security Awareness, Education, and Training

Specific to which parts of the overall AI system? [?]
  • N/A, not AI component-specific

Discussed in which authoritative AI security sources? [?]
  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Specific ML > Integrate ML specificities to awareness strategy and ensure all ML stakeholders are receiving it

Discussed in which commercial AI security sources? [?]
  • The anecdotes AI GRC Toolkit
    2024, © Anecdotes A.I Ltd.
    • Where:
      • Control 6.1: Training

  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 41: Platform security — secure SDLC
      • Resources and Further Reading > AI and Machine Learning on Databricks

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Expand strong security foundations to the AI ecosystem > Retrain and retain

  • HiddenLayer’s 2024 AI Threat Landscape Report
    2024, © HiddenLayer
    • Where:
      • Part 4: Predictions and recommendations > 5. Secure development practices > Bullet 1

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Backdooring models (insider attacks) > Mitigations > Secure development practices
      • Attacks on the infrastructure hosting AI services > Mitigations > Security awareness training

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No (dual responsibility). The AI application provider and its AI service providers (if used) are responsible for independently performing this requirement outside of the AI system’s technology stack.
Version control of AI assets

HITRUST CSF requirement statement [?] (06.10jAISecSystem.1)

AI assets, including
(1) code to create, train, and/or deploy AI models;
(2) training datasets;
(3) fine-tuning datasets;
(4) RAG datasets (if used);
(5) configurations of pipelines used to create, train, and/or deploy AI models;
(6) code used by language model tools such as agents and plugins (if used); and
(7) models
are versioned and tracked.

 

Evaluative elements in this requirement statement [?]
1. Code used to create, train, and/or deploy AI models is versioned and tracked.
2. Training datasets are versioned and tracked.
3. Fine-tuning datasets are versioned and tracked.
4. RAG datasets versioned and tracked.
5. Configurations of pipelines used to create, train, and/or deploy AI models are versioned and tracked.
6. Code used by language model tools such as agents and plugins is versioned and tracked.
7. AI models are versioned and tracked.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, evidence that the AI assets listed in the requirement statement are each versioned and tracked (as applicable).

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate percentage of the organization’s AI assets which are versioned and tracked of total. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 06 Configuration Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10.j Access Control to Program Source Code

Specific to which parts of the overall AI system? [?]
AI application layer:
  • AI plugins and agents
  • The deployed AI application (Considered in the associated HITRUST e1, i1, or r2 assessment)
AI platform layer
  • The AI platform and associated APIs (Considered in the associated HITRUST e1, i1, or r2 assessment)
  • Model engineering environment and model pipeline

Discussed in which authoritative AI security sources? [?]
Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 10: Version data
      • Control DASF 17: Track and reproduce the training data used for ML model training
      • Control DASF 52: Source code control

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: When is this requirement applicable, and when could it be inapplicable?
    • This requirement applies regardless of the model’s provenance and regardless of the AI system architecture.
    • Element #2 is only applicable when language model tools such as agents or plugins are used.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI platform provider and/or model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI platform provider and/or model creator apply.
Inspection of AI software assets

HITRUST CSF requirement statement [?] (07.10mAISecOrganizational.6)

To find potentially exploitable vulnerabilities, the organization inspects downloaded AI software assets 
before use—including
(1) models (e.g., such as those sourced from online model zoos);
(2) software packages used to create, train, and/or deploy models (e.g., python packages); and
(3) language model tools such as agents and plugins (if used).
The organization
(4) acts upon the results, as necessary.

 

Evaluative elements in this requirement statement [?]
1. To find potentially exploitable vulnerabilities, the organization inspects downloaded AI models 
before use (e.g., such as those sourced from online model zoos).
2. To find potentially exploitable vulnerabilities, the organization inspects downloaded software 
packages (e.g., python packages) used to create, train, and/or deploy models before use.
3. To find potentially exploitable vulnerabilities, the organization inspects downloaded 
language model tools such as agents and plugins before use (if used).
4. The organization acts upon the results, as necessary.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, select a sample of downloaded AI software assets and confirm they were inspected before use. Further, confirm that the organization acts upon the inspection results, as necessary.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate percentage of the organization’s downloaded AI software assets that were not inspected before use. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that all AI software assets are inspected before use.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 07 Vulnerability Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10.m – Control of Technical Vulnerabilities

Specific to which parts of the overall AI system? [?]
AI application layer:
  • The deployed AI application (Considered in the associated HITRUST e1, i1, or r2 assessment)
AI platform layer:
  • The deployed AI model
  • Model engineering environment and model pipeline

Discussed in which authoritative AI security sources? [?]
  • OWASP 2023 Top 10 for LLM Applications
    Oct. 2023, © The OWASP Foundation
    • Where:
      • LLM05: Supply chain vulnerabilities > Prevention and mitigation strategies > Bullet #8
      • LLM07: Insecure plugin design > Prevention and mitigation strategies > Bullet #3

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 41: Platform security – Secure SDLC
      • Control DASF 53: Third-party library control

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: When is this requirement applicable, and when could it be inapplicable?
    • This requirement applies regardless of the model’s provenance and regardless of the AI system architecture.
    • Element #1 is only applicable when the AI system uses an AI model sourced from an online model zoo / model hub.
    • Element #2 is only applicable when the organization uses software packages downloaded from online package repositories.
    • Element #3 is only applicable when language model tools such as agents or plugins are used.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI platform provider and/or model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI platform provider and/or model creator provider apply.
Change control over AI models

HITRUST CSF requirement statement [?] (06.09bAISecSystem.1)

Changes to AI models (including upgrading to new model versions and moving to 
completely different models) are consistently
(1) documented,
(2) tested, and
(3) approved
in accordance with the organization’s software change control policy prior to deployment.
When upgrading to a newer version of an externally developed model, the organization
(4) obtains and reviews the release notes describing the model's update.

 

Evaluative elements in this requirement statement [?]
1. Changes to AI models (including upgrading to new model versions and moving to 
completely different models) are consistently documented in accordance with the
organization’s software change control policy prior to deployment.
2. Changes to AI models (including upgrading to new model versions and moving to 
completely different models) are consistently tested in accordance with the organization’s
software change control policy prior to deployment.
3. Changes to AI models (including upgrading to new model versions and moving to 
completely different models) are consistently approved in accordance with the organization’s
software change control policy prior to deployment.
4. When upgrading to a newer version of an externally developed model, the organization 
obtains and reviews the release notes describing the model's update.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, select a sample of the AI models change documentation to confirm all changes were documented. Further, confirm that the AI models change documentation includes testing and approval information in accordance with the organization’s software change control policy, prior to deployment.
    • Further, confirm that the organization obtained and reviewed the release notes when upgrading to newer versions of externally developed models.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate percentage of the organization’s AI models that received changes without documentation. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that all AI model changes are consistently documented, tested, and approved in accordance with the organization’s software change control policy prior to deployment.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 06 Configuration Management
  • Control category: 09.0 – Communications and Operations Management
  • Control reference: 09.b – Change Management

Specific to which parts of the overall AI system? [?]
AI platform layer:
  • The deployed AI model

Discussed in which authoritative AI security sources? [?]
  • ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system
    2023, © International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
    • Where:
      • 8. Operation > Operational planning and control > Paragraph 5
      • Annex A > A.6. AI system life cycle > A.6.2.2. AI system requirements and specification
      • Annex A > A.6. AI system life cycle > A.6.2.3. Documentation of AI system design and development
      • Annex A > A.6. AI system life cycle > A.6.2.4. AI system verification and validation
      • Annex A > A.6. AI system life cycle > A.6.2.5. AI system deployment

  • OWASP AI Exchange
    2024, © The OWASP Foundation
  • Guidelines for Secure AI System Development
    Nov. 2023, Cybersecurity & Infrastructure Security Agency (CISA)
    • Where:
      • 4. Secure operation and maintenance > Follow a secure by design approach to updates

  • Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems
    Apr 2024, National Security Agency (NSA)
    • Where:
      • Continuously protect the AI system > Validate the AI system before and during use > Bullet 3
      • Continuously protect the AI system > Validate the AI system before and during use > Bullet 4
      • Secure AI operation and maintenance > Update and patch regularly > Bullet 1

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Organizational > Apply documentation requirements to AI projects

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 19: Manage end-to-end machine learning lifecycle
      • Control DASF 23: Register, version, approve, promote, and deploy model
      • Control DASF 29: Build MLOps workflows
      • Control DASF 41: Platform security – Secure SDLC
      • Control DASF 42: Employ data-centric MLOps and LLMOps
      • Control DASF 45: Evaluate models
      • Control DASF 49: Automate LLM evaluation

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: When is this requirement applicable, and when could it be inapplicable?
    • This requirement applies regardless of the model’s provenance and regardless of the AI system architecture.
    • This requirement may be inapplicable if no new models or model versions were deployed during the past 12 months.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This may be a responsibility shared between an AI application provider and their AI platform provider (if used), performed independently on separate layers/components of the overall AI system.
Change control over language model tools

HITRUST CSF requirement statement [?] (06.10hAISecSystem.5)

Changes to language model tools such as agents and plugins are consistently
(1) documented,
(2) tested, and
(3) approved
in accordance with the organization’s software change control policy prior to deployment.

 

Evaluative elements in this requirement statement [?]
1. Changes to language model tools such as agents and plugins are consistently 
documented in accordance with the organization’s software change control policy prior
to deployment.
2. Changes to language model tools such as agents and plugins are consistently 
tested in accordance with the organization’s software change control policy prior to
deployment.
3. Changes to language model tools such as agents and plugins are consistently 
approved in accordance with the organization’s software change control policy prior
to deployment.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample-based test where possible for each evaluative element. Example test(s):
    • For example, select a sample of the language model change documentation to confirm all changes were documented. Further, confirm that the change documentation includes testing and approval information in accordance with the organization’s software change control policy, prior to deployment.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate percentage of the organization’s language model that received changes without documentation. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that all changes to language model tools such as agents and plugins are consistently documented, tested, and approved in accordance with the organization’s software change control policy prior to deployment.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 06 Configuration Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10h- Control of Operational Software

Specific to which parts of the overall AI system? [?]
AI application layer:
  • AI plugins and agents

Discussed in which authoritative AI security sources? [?]
  • ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system
    2023, © International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
    • Where:
      • 8. Operation > Operational planning and control > Paragraph 5
      • Annex A > A.6. AI system life cycle > A.6.2.2. AI system requirements and specification
      • Annex A > A.6. AI system life cycle > A.6.2.3. Documentation of AI system design and development
      • Annex A > A.6. AI system life cycle > A.6.2.4. AI system verification and validation
      • Annex A > A.6. AI system life cycle > A.6.2.5. AI system deployment

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Organizational > Apply documentation requirements to AI projects_

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is only included when the assessment’s in-scope AI system leverages a generative AI model.
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: When is this requirement applicable, and when could it be inapplicable?
    • This requirement is only applicable when language model tools such as agents or plugins are used.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No. Implementing and/or configuring this requirement is the AI application provider’s sole responsibility.
Documentation of AI specifics during system design and development

HITRUST CSF requirement statement [?] (06.10hAISecSystem.6)

Documentation of the overall AI system discusses the creation, operation, and lifecycle 
management of any
(1) models,
(2) datasets (including data used for training, tuning, and prompt enhancement via RAG),
(3) configurations (e.g., metaprompts),
(4) language model tools such as agents and plugins
maintained by the organization, as applicable.
Documentation of the overall AI system also describes the
(5) tooling resources (e.g., AI platforms, model engineering environments, pipeline configurations),
(6) system and computing resources, and
(7) human resources
needed for the development and operation of the AI system.

 

Evaluative elements in this requirement statement [?]
1. Documentation of the overall AI system discusses the creation, operation, and 
lifecycle management of any models maintained by the organization, as applicable.
2. Documentation of the overall AI system discusses the creation, operation, and 
lifecycle management of any datasets maintained by the organization (including data
used for training, tuning, and prompt enhancement via RAG), as applicable.
3. Documentation of the overall AI system discusses the creation, operation, and 
lifecycle management of any AI-relevant configurations (e.g., metaprompts) maintained by the
organization, as applicable.
4. Documentation of the overall AI system discusses the creation, operation, and 
lifecycle management of any language model tools such as agents and plugins maintained
by the organization, as applicable.
5. Documentation of the overall AI system describes the tooling resources (e.g., AI 
platforms, model engineering environments, pipeline configurations) needed for the development and operation
of the AI system.
6. Documentation of the overall AI system describes the system and computing resources
needed for the development and operation of the AI system.
7. Documentation of the overall AI system describes the human resources needed for the
development and operation of the AI system.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the overall AI system documentation to confirm it discusses the creation, operation, and lifecycle management of any models, datasets (including data used for training, tuning, and prompt enhancement via RAG), configurations (e.g., metaprompts), language model tools such as agents and plugins maintained by the organization, as applicable. Further, confirm the documentation of the overall AI system also describes the tooling resources (e.g., AI platforms, model engineering environments), system and computing resources, and human resources needed for the development and operation of the AI system.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate completeness of the organization’s overall AI system documentation that discusses the required elements in the requirement statement. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that all the required elements are included in the overall AI system documentation.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 06 Configuration Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10h- Control of Operational Software

Specific to which parts of the overall AI system? [?]
AI application layer:
  • AI plugins and agents
  • Prompt augmentations (e.g., via RAG) and associated data sources
  • Application AI safety and security systems
  • The deployed AI application (Considered in the associated HITRUST e1, i1, or r2 assessment)
AI platform layer:
  • The AI platform and associated APIs (Considered in the associated HITRUST e1, i1, or r2 assessment)
  • Model safety and security systems
  • Model tuning and associated datasets
  • The deployed AI model
  • Model engineering environment and model pipeline
  • AI datasets and data pipelines

Discussed in which authoritative AI security sources? [?]
  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Technical > Ensure ML projects follow the global process for integrating security into projects
      • 4.1- Security Controls > Organizational > Apply documentation requirements to AI projects

Discussed in which commercial AI security sources? [?]
Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This may be a responsibility shared between an AI application provider and their AI platform provider (if used), performed independently on separate layers/components of the overall AI system.
Linkage between dataset, model, and pipeline config

HITRUST CSF requirement statement [?] (06.10hAISecSystem.7)

When creating machine learning-based AI models, the organization 
(1) explicitly documents a linkage between the versions of the training dataset used, the pipeline
configuration used, and resulting AI model.

 

Evaluative elements in this requirement statement [?]
1. When creating machine learning-based AI models, the organization explicitly documents a 
linkage between the versions of the training dataset used, the pipeline configuration used, and
resulting AI model.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, select a sample of machine learning-based AI model versions released by the organization during the past year and ensure that the associated documentation contains an explicit linkage between the versions of the training dataset used, the pipeline configuration used, and the resulting AI model.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the percentage of ML-based AI models produced by the organization with this linkage documented as a percentage of total.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.


Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 06 Configuration Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10h- Control of Operational Software
Specific to which parts of the overall AI system? [?]

  • AI platform layer:
    • AI datasets and data pipelines
    • The AI model itself
    • Model-serving infrastructure and APIs
    • Model pipeline and model engineering environment
    • Model-level configurations (e.g., hyperparameters)
Discussed in which authoritative AI security sources? [?]
Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 10: Version data
      • Control DASF 17: Track and reproduce the training data used for ML model training
      • Control DASF 52: Source code control

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • The Security for AI systems regulatory factor must be present in the assessment.
    • This requirement only applies when machine learning-based AI models are in use (generative, predictive).

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI model creator apply.
Verification of origin and integrity of AI assets

HITRUST CSF requirement statement [?] (07.10mAISecOrganizational.2)

To help ensure that unsafe assets are not introduced into the AI system, the 
organization checks the cryptographic hashes and/or digital signatures on downloaded AI
(1) models;
(2) software packages (e.g., those for model creation, training, and/or deployment);
(3) datasets (e.g., training datasets); and
(4) language model tools (e.g., agents, plugins)
before use, as applicable.

 

Evaluative elements in this requirement statement [?]
1. To help ensure that unsafe assets are not introduced into the AI system, the 
organization checks the cryptographic hashes and/or digital signatures on downloaded
AI models before use, if applicable.
2. To help ensure that unsafe assets are not introduced into the AI system, the 
organization checks the cryptographic hashes and/or digital signatures on
downloaded AI software packages (e.g., those used for model creation, training, and/or
deployment) before use, if applicable.
3. To help ensure that unsafe assets are not introduced into the AI system, the 
organization checks the cryptographic hashes and/or digital signatures on
downloaded AI datasets (e.g., training datasets) before use, if applicable.
4. To help ensure that unsafe assets are not introduced into the AI system, the 
organization checks the cryptographic hashes and/or digital signatures on
downloaded language model tools (e.g., agents, plugins) before use, if applicable.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure the organization verifies the cryptographic hashes and/or digital signatures on downloaded AI models, software packages (e.g., those for model creation, training, and/or deployment), datasets (e.g., training datasets), and language model tools (e.g., agents, plugins) before use, as applicable.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization verifies the cryptographic hashes and/or digital signatures on downloaded AI models, software packages (e.g., those for model creation, training, and/or deployment), datasets (e.g., training datasets), and language model tools (e.g., agents, plugins) before use, as applicable.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 07 Vulnerability Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10.m – Control of Technical Vulnerabilities

Specific to which parts of the overall AI system? [?]
AI application layer:
  • AI plugins and agents
AI platform layer:
  • The deployed AI model
  • Model engineering environment and model pipeline
  • AI datasets and data pipelines

Discussed in which authoritative AI security sources? [?]
Discussed in which commercial AI security sources? [?]

  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • DASF 10: Version data
      • DASF 11: Capture and view data lineage
      • DASF 22: Build models with all representative, accurate and relevant data sources to minimize third-party dependencies for models and data where possible
      • DASF 27: Pretrain a large language model (LLM) on your own IP
      • DASF 53: Third-party library control
      • DASF 42: Data-centric MLOps and LLMOps promote models as code using CI/CD.

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Model poisoning > Mitigations > Bullet 1
      • Self-hosted OSS LLMs Security > Mitigations > Cryptographic signing

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI platform provider and/or model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI platform provider and/or model creator apply.
TOPIC: AI supply chain
Due diligence review of AI providers

HITRUST CSF requirement statement [?] (14.09fAISecSystem.1)

The organization performs an evaluation of the security posture of any external 
commercial providers of AI system components, including AI
(1) models;
(2) datasets;
(3) software packages (e.g., those for model creation, training, and/or deployment);
(4) platforms and computing infrastructure; and
(5) language model tools such as agents and plugins, as applicable.
This evaluation is performed
(6) during onboarding of the provider and
(7) on a routine basis thereafter
in accordance with the organization’s supplier oversight processes.

 

Evaluative elements in this requirement statement [?]
1. The organization performs an evaluation of the security posture of any external 
commercial providers of AI models, if applicable.
2. The organization performs an evaluation of the security posture of any external 
commercial providers of AI datasets, if applicable.
3. The organization performs an evaluation of the security posture of any external 
commercial providers of AI software packages (e.g., those for model creation, training,
and/or deployment), if applicable.
4. The organization performs an evaluation of the security posture of any external 
commercial providers of AI platforms and computing infrastructure, if applicable.
5. The organization performs an evaluation of the security posture of any external 
commercial providers of language model tools such as agents and plugins, if applicable.
6. This evaluation is performed during onboarding of the provider.
7. This evaluation is performed on a routine basis thereafter in accordance with the organization’s supplier oversight processes.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review documentation associated with a sample of external commercial providers of AI system components to ensure the organization performed an evaluation of their security posture. Further, confirm that this evaluation is conducted during the provider’s onboarding and on a routine basis thereafter, in accordance with the organization’s supplier oversight processes.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization performs an evaluation of the security posture of any external commercial providers of AI system components, including AI models, datasets, software packages (e.g., those for model creation, training, and/or deployment), platforms and computing infrastructure, and language model tools such as agents and plugins, as applicable. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that all external commercial providers of AI system components are evaluated.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 14 Third-party Assurance
  • Control category: 05.0 – Organization of Information Security
  • Control reference: 09.f – Monitoring and Review of Third Party Services

Specific to which parts of the overall AI system? [?]
AI application layer:
  • AI plugins and agents
  • The AI application’s supporting IT infrastructure (Considered in the associated HITRUST e1, i1, or r2 assessment)
AI platform layer:
  • The AI platform and associated APIs (Considered in the associated HITRUST e1, i1, or r2 assessment)
  • The deployed AI model
  • Model engineering environment and model pipeline
  • AI datasets and data pipelines
  • AI compute infrastructure (Considered in the associated HITRUST e1, i1, or r2 assessment)

Discussed in which authoritative AI security sources? [?]
  • OWASP 2023 Top 10 for LLM Applications
    Oct. 2023, © The OWASP Foundation
    • Where:
      • LLM05: Supply chain vulnerabilities > Prevention and mitigation strategies > Bullet #1
      • LLM05: Supply chain vulnerabilities > Prevention and mitigation strategies > Bullet #10

  • LLM AI Cybersecurity & Governance Checklist
    Feb. 2024, © The OWASP Foundation
    • Where:
      • 3. Checklist > 3.8. Regulatory > Bullet #5
      • 3. Checklist > 3.8. Regulatory > Bullet #7
      • 3. Checklist > 3.8. Regulatory > Bullet #8
      • 3. Checklist > 3.8. Regulatory > Bullet #9
      • 3. Checklist > 3.9. Using or implementing large language model solutions > Bullet #11
      • 3. Checklist > 3.9. Using or implementing large language model solutions > Bullet #12

  • Guidelines for Secure AI System Development
    Nov. 2023, Cybersecurity & Infrastructure Security Agency (CISA)
    • Where:
      • 1. Secure design > Design your system for security as well as functionality and performance
      • 2. Secure development > Secure your supply chain

Discussed in which commercial AI security sources? [?]
Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No (dual responsibility). The AI application provider and its AI service providers are responsible for independently performing this requirement outside of the AI system’s technology stack.
Review the model card of models used by the AI system

HITRUST CSF requirement statement [?] (14.05iAISecOrganizational.1)

For externally sourced AI models deployed by the organization in production 
applications, the organization
(1) reviews the model card prior to deployment.

 

Evaluative elements in this requirement statement [?]
1. For externally sourced AI models deployed by the organization in production 
applications, the organization reviews the model card prior to deployment.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the externally sourced AI models deployed in production applications and confirm the organization reviews the model card prior to deployment.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate percentage of the organization’s externally sourced AI models deployed in production applications are reviewed. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that all model cards are reviewed prior to deployment.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 14 Third-party Assurance
  • Control category: 05.0 – Organization of Information Security
  • Control reference: 05.i – Identification of Risks Related to External Parties

Specific to which parts of the overall AI system? [?]
AI platform layer:
  • The deployed AI model

Discussed in which authoritative AI security sources? [?]
  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where:
      • Using generative AI safely and responsibly > Ethics > Transparency and explainability > Practical recommendations > Bullet #4

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No. Implementing and/or configuring this requirement is the AI application provider’s sole responsibility.

AI security requirements communicated to AI providers

HITRUST CSF requirement statement [?] (14.05kAISecOrganizational.1)

Agreements between the organization and external commercial providers of AI system 
components and services clearly communicate the organization’s AI security requirements,
including agreements with providers of AI
(1) models,
(2) datasets,
(3) software packages,
(4) platforms and computing infrastructure,
(5) language model tools such as agents and plugins, and
(6) contracted AI system-related services (e.g., outsourced AI system development), as
applicable.

 

Evaluative elements in this requirement statement [?]
1. Agreements between the organization and external commercial providers of AI models 
clearly communicate the organization’s AI security requirements, as applicable.
2. Agreements between the organization and external commercial providers of AI 
datasets clearly communicate the organization’s AI security requirements, as applicable.
3. Agreements between the organization and external commercial providers of AI 
software packages clearly communicate the organization’s AI security requirements,
as applicable.
4. Agreements between the organization and external commercial providers of AI 
platforms and computing infrastructure clearly communicate the organization’s AI
security requirements, as applicable.
5. Agreements between the organization and external commercial providers of AI models 
clearly communicate the organization’s AI security requirements, as applicable.
6. Agreements between the organization and external commercial providers of contracted 
AI system-related services (e.g., outsourced AI system development) clearly communicate
the organization’s AI security requirements, as applicable.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review a sample of agreements between the organization and external commercial providers of AI system components and services to evidence these documents clearly communicate the organization’s AI security requirements. Further, confirm that agreements are in place with all providers of AI models, datasets, software packages, platforms and computing infrastructure, language model tools such as agents and plugins, and contracted AI system-related services (e.g., outsourced AI system development), as applicable.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if agreements between the organization and external commercial providers of AI system components and services clearly communicate the organization’s AI security requirements. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that agreements are in place with all providers of AI models, datasets, software packages, platforms and computing infrastructure, language model tools such as agents and plugins, and contracted AI system-related services (e.g., outsourced AI system development), as applicable.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 14 Third-party Assurance
  • Control category: 05.0 – Organization of Information Security
  • Control reference: 05.k – Addressing Security in Third Party Agreements

Specific to which parts of the overall AI system? [?]
AI application layer:
  • AI plugins and agents
  • The deployed AI application (Considered in the associated HITRUST e1, i1, or r2 assessment)
  • The AI application’s supporting IT infrastructure (Considered in the associated HITRUST e1, i1, or r2 assessment)
AI platform layer:
  • The AI platform and associated APIs (Considered in the associated HITRUST e1, i1, or r2 assessment)
  • The deployed AI model
  • Model engineering environment and model pipeline
  • AI datasets and data pipelines
  • AI compute infrastructure (Considered in the associated HITRUST e1, i1, or r2 assessment)

Discussed in which authoritative AI security sources? [?]
  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where:
      • Building generative AI solutions > Buying generative AI > Specifying your requirements > Paragraph 1
      • Using generative AI safely and responsibly > Ethics > Accountability and responsibility > Practical recommendations > Bullet 3

Discussed in which commercial AI security sources? [?]
Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No (joint responsibility). The AI application provider and its AI service providers (if used) are responsible for jointly performing this requirement outside of the AI system’s technology stack (e.g., through a jointly executed agreement / contract).
TOPIC: Model robustness
Data minimization or anonymization

HITRUST CSF requirement statement [?] (19.13jAISecOrganizational.1)

The organization reviews the data used to 
(1) train AI models,
(2) fine-tune AI models, and
(3) enhance AI prompts via RAG
to identify any data fields or records that can be omitted or anonymized (to prevent them
from potentially leaking) and takes action on findings.

 

Evaluative elements in this requirement statement [?]
1. The organization reviews the data used to train AI models to identify any data fields or records that can be removed or 
anonymized (to prevent them from potentially leaking) and takes action on findings.
2. The organization reviews the data used to fine-tune AI models to identify any data fields or records that can be removed or 
anonymized (to prevent them from potentially leaking) and takes action on findings.
3. The organization reviews the data used to enhance AI prompts via RAG to identify any data fields or records that can be 
removed or anonymized (to prevent them from potentially leaking) and takes action on findings.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, evaluate the AI model to ensure the organization reviews the data used to train AI models, fine-tune AI models, and enhance AI prompts via RAG to identify any data fields or records that can be omitted or anonymized (to prevent them from potentially leaking) and takes action on findings.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the percentage of the organization’s AI datasets which have undergone this review.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 19 Data Protection & Privacy
  • Control category: 13.0 – Privacy Practices
  • Control reference: 13.j – Data Minimization

Specific to which parts of the overall AI system? [?]
AI application layer:
  • Prompt enhancement via RAG, and associated RAG data sources
AI platform layer:
  • Model tuning and associated datasets
  • AI datasets and data pipelines

Discussed in which authoritative AI security sources? [?]
  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where: Using generative AI safely and responsibly > Data protection and privacy > Data minimization > Bullet 1

Discussed in which commercial AI security sources? [?]
Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is included when confidential and/or sensitive data was used for model training, model tuning, and/or prompt enhancement via RAG for the assessment’s in-scope AI system.
    • However, this is requirement is not included when the in-scope AI system leverages a rule-based AI model only.
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI platform provider and/or model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI platform provider and/or model creator apply.
Limit output specificity and precision

HITRUST CSF requirement statement [?] (07.10mAISecOrganizational.3)

The
(1) precision, and
(2) specificity
of AI application outputs are reduced to limit an adversary's ability to extract information
from the model and/or optimize potential attacks (e.g., omit or round indication of precision
of confidence in the output so it cannot be used for optimization, limit specificity of
output class ontology).

 

Evaluative elements in this requirement statement [?]
1. The precision of AI system outputs is reduced (e.g., round indication 
of confidence in the output so it cannot be used for optimization) to limit an
adversary's ability to extract information from the model and/or optimize
potential attacks.
2. The specificity of AI system outputs is reduced (e.g., limit specificity of 
output class ontology) to limit an adversary's ability to extract information
from the model and/or optimize potential attacks.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI model to ensure the precision and specificity of AI application outputs are reduced to limit an adversary’s ability to extract information from the model and/or optimize potential attacks.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if AI application outputs are reduced to limit an adversary’s ability to extract information from the model and/or optimize potential attacks.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 07 Vulnerability Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10.m – Control of Technical Vulnerabilities
Specific to which parts of the overall AI system? [?]

  • AI application layer:
    • Application AI safety and security systems

Discussed in which authoritative AI security sources? [?]
  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Specific ML > Reduce the information given by the model

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is included when confidential and/or sensitive data was used for model training, model tuning, and/or prompt enhancement via RAG for the assessment’s in-scope AI systems.
    • This requirement is also included when the assessment’s in-scope AI system(s) leverage models with technical architectures that are confidential to the organization.
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No. Implementing and/or configuring this requirement is the AI application provider’s sole responsibility.
Additional Training Data Measures

HITRUST CSF requirement statement [?] (17.03bAISecOrganizational.3)

The organization evaluates the need to take additional measures against AI training data (e.g., 
adversarial training, using randomized smoothing techniques) to specifically ensure that the
machine learning-based AI models it produces are more resistant to evasion and poisoning
attacks. This evaluation is
(1) documented,
(2) performed regularly (at least semiannually) thereafter, and
(3) revisited when security incidents related to the AI system occur.
Additional measures deemed necessary as a result of this evaluation are
(4) implemented by the organization.

 

Evaluative elements in this requirement statement [?]
1. The organization documents an evaluation of the need to take additional measures against AI training 
data (e.g., adversarial training, using randomized smoothing techniques) to specifically ensure that the
machine learning-based AI models it produces are more resistant to evasion and poisoning attacks.
2. This evaluation is performed regularly (at least semiannually).
3. This evaluation is revisited when security incidents related to the AI system occur.
4. Additional measures deemed necessary as a result of this evaluation are implemented by the organization. 
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, inspect the documentation produced as a result of the evaluation described in this requirement and confirm that any measures deemed necessary as a result of the evaluation have been implemented. Further, evidence that the evaluation was revisited at the frequency described in the requirement.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the percentage of the AI models produced by the organization subject to this evaluation of total.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.


Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 17 Risk Management
  • Control category: 03.0 – Risk Management
  • Control reference: 03.b – Performing Risk Assessments

Specific to which parts of the overall AI system? [?]

  • AI platform layer:
    • AI datasets and data pipelines

Discussed in which authoritative AI security sources? [?]
  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Specific ML > Apply modifications on inputs
      • 4.1- Security Controls > Specific ML > Add some adversarial examples to the training dataset

Discussed in which commercial AI security sources? [?]
  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Lack of explainability / transparency > Mitigations > Adversarial training
      • Prompt injection > Mitigations > Adversarial training
      • Indirect prompt injection > Mitigations > Bullet 2
      • Adversarial samples > Mitigations > Robust model training
      • Sponge samples > Mitigations > Adversarial training
      • Fuzzing > Mitigations > Adversarial training
      • Model poisoning > Mitigations > Bullet 4
      • Training data poisoning > Mitigations > Robust model training

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • The Security for AI systems regulatory factor must also be present in the assessment.
    • However, this is only included when non-generative machine learning models are in-scope, as these are the only types of models this requirement applies to.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI model creator apply.
TOPIC: Access to the AI system
Limit the release of technical info about the AI system

HITRUST CSF requirement statement [?] (19.09zAISecOrganizational.1)

The organization limits the release of technical AI project details, including specific information on the
(1) technical architecture of the overall AI system;
(2) datasets used for training, testing, validating, and tuning of confidential and/or closed-source AI models;
(3) datasets used for prompt augmentation via RAG (if performed);
(4) algorithm(s) used to create confidential and/or closed-source AI models;
(5) architecture of confidential and/or closed-source AI models;
(6) language model tools used such as agents and plugins (if used);
(7) safety and security checkpoints built into the AI system; and
(8) information on teams developing and supporting the AI system.

 

Evaluative elements in this requirement statement [?]
1. The organization limits the release of specific information on the technical architecture of the overall AI system.
2. The organization limits the release of specific information on the datasets used for training, testing, validating, and tuning of confidential and/or closed-source AI models.
3. The organization limits the release of specific information on the datasets used for prompt augmentation via RAG (if performed).
4. The organization limits the release of specific information on the algorithm(s) used to create confidential and/or closed-source AI models.
5. The organization limits the release of specific information on the architecture of confidential and/or closed-source AI models.
6. The organization limits the release of specific information on the language model tools used such as agents and plugins (if used).
7. The organization limits the release of specific information on the safety and security checkpoints built into the AI system.
8. The organization limits the release of specific information on the information on teams developing and supporting the AI system.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, examine released documentation on the AI system to ensure the organization restricts the release of the technical AI project details outlined in the requirement statement.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures could indicate the amount of AI systems deployed by the company, of total, for which technical AI details have or have not been released in accordance with this requirement statement.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 19 Data Protection & Privacy
  • Control category: 09.0 – Communications and Operations Management
  • Control reference: 09.z – Publicly Available Information

Specific to which parts of the overall AI system? [?]
  • AI application layer:
    • AI plugins and agents
    • Application AI safety and security systems
  • AI platform layer:
    • Model tuning and associated datasets
    • The deployed AI model
    • Model engineering environment and model pipeline
    • AI datasets and data pipelines

Discussed in which authoritative AI security sources? [?]
  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Specific ML > Reduce the available information about the model

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is only included when the assessment’s in-scope AI system(s) leverage models with technical architectures that are confidential to the organization.
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No (dual responsibility). The AI application provider and its AI service providers (if used) are responsible for independently performing this requirement outside of the AI system’s technology stack.
Model Rate Limiting / Throttling

HITRUST CSF requirement statement [?] (07.10mAISecOrganizational.5)

To prevent against successful denial of AI service attacks and to hinder experimentation for AI attacks, the information system limits / throttles the 
(1) total number and
(2) rate
of API calls that a user can make to the AI model in a given time period.

 

Evaluative elements in this requirement statement [?]
1. To prevent against successful denial of AI service attacks and to hinder experimentation for AI attacks, the information system limits / throttles the total number of API calls that a user 
can make to the AI model in a given time period.
2. To prevent against successful denial of AI service attacks and to hinder experimentation for AI attacks, the information system limits / throttles the rate of API calls that a user can make 
to the AI model in a given time period.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI model to ensure the system limits or throttles the total number and rate of API calls that a user can make to the AI model within a given time period.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the system limits or throttles the total number and rate of API calls that a user can make to the AI model within a given time period.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 07 Vulnerability Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10.m – Control of Technical Vulnerabilities

Specific to which parts of the overall AI system? [?]
AI application layer:
  • Application AI safety and security systems
AI platform layer:
  • Model safety and security systems
Discussed in which authoritative AI security sources? [?]
  • OWASP 2023 Top 10 for LLM Applications
    Oct. 2023, © The OWASP Foundation
    • Where:
      • LLM04: Model denial of service > Prevention and mitigation strategies > Bullet #3
      • LLM08: Excessive agency > Prevention and mitigation strategies > Bullet #9
      • LLM10: Model theft > Prevention and mitigation strategies > Bullet #6

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 32: Streamline the usage and management of various large language model (LLM) providers

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This may be a responsibility shared between an AI application provider and their AI platform provider (if used), performed independently on separate layers/components of the overall AI system.
GenAI model least privilege

HITRUST CSF requirement statement [?] (11.01cAISecSystem.5)

The organization restricts the
(1) data (e.g., databases, document repositories, embeddings) access and
(2) capabilities (e.g., messaging) granted to generative AI models (including through
language model tools such as agents and plugins) following the least privilege principle.
This access is controlled in accordance with the organization’s policies regarding
(3) access management (including approvals, revocations, periodic access reviews), and
(4) authentication.

 

Evaluative elements in this requirement statement [?]
1. The organization restricts the data (e.g., databases, document repositories, embeddings) access 
granted to generative AI models (including through language model tools such as agents
and plugins) following the least privilege principle.
2. The organization restricts the capabilities (e.g., messaging) granted to generative AI 
models (including through language model tools such as agents and plugins) following the
least privilege principle.
3. This access is controlled in accordance with the organization’s policies regarding 
access management (including approvals, revocations, periodic access reviews).
4. This access is controlled in accordance with the organization’s policies regarding 
authentication.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI model to ensure the organization restricts the data (e.g., databases, document repositories, embeddings) access and capabilities (e.g., messaging) granted to generative AI models (including through language model tools such as agents and plugins) following the least privilege principle. Further, confirm this access is controlled in accordance with the organization’s policies regarding access management (including approvals, revocations, periodic access reviews) and authentication.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization restricts the data (e.g., databases, document repositories, embeddings) access and capabilities (e.g., messaging) granted to generative AI models (including through language model tools such as agents and plugins) following the least privilege principle. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that all access is controlled in accordance with the organization’s policies regarding access management (including approvals, revocations, periodic access reviews) and authentication.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 11 Access Control
  • Control category: 01.0 – Access Control
  • Control reference: 01.c – Privilege Management

Specific to which parts of the overall AI system? [?]
AI application layer:
  • AI plugins and agents
  • The deployed AI application (Considered in the underlying HITRUST e1, i1, or r2 assessment)

Discussed in which authoritative AI security sources? [?]
  • OWASP 2023 Top 10 for LLM Applications
    Oct. 2023, © The OWASP Foundation
    • Where:
      • LLM01: Prompt injection > Prevention and mitigation strategies > Bullet #1
      • LLM07: Insecure plugin design > Prevention and mitigation strategies > Bullet #4
      • LLM08: Excessive agency > Prevention and mitigation strategies > Bullet #1
      • LLM08: Excessive agency > Prevention and mitigation strategies > Bullet #2
      • LLM08: Excessive agency > Prevention and mitigation strategies > Bullet #4
      • LLM10: Model theft > Prevention and mitigation strategies > Bullet #2

  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where:
      • Building generative AI solutions > Building the solution > Patterns > Practical recommendations > Bullet #5

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • DASF 24: Control access to models and model assets
      • DASF 31: Secure model serving endpoints
      • DASF 46: Store and retrieve embeddings securely

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is included when the assessment’s in-scope AI system leverages a generative AI model.
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No. Implementing and/or configuring this requirement is the AI application provider’s sole responsibility.
Restrict access to data used for AI

HITRUST CSF requirement statement [?] (11.01cAISecSystem.6)

The organization restricts all access to the data used to 
(1) train, test, and validate AI models;
(2) fine-tune AI models; and
enhance AI prompts via RAG (both the
(3) original and
(4) vectorized formats stored as embeddings, if used)
following the least privilege principle.
This access is controlled in accordance with the organization’s policies regarding
(5) access management (including approvals, revocations, periodic access reviews), and
(6) authentication (which calls for multi-factor authentication or a similar level of protection).

 

Evaluative elements in this requirement statement [?]
1. The organization restricts all access to the data used to train, test, and validate AI 
models following the least privilege principle.
2. The organization restricts all access to the data used to fine-tune AI models following 
the least privilege principle.
3. The organization restricts all access to the original (non-vectorized) data used to enhance AI prompts
via RAG following the least privilege principle, if applicable.
4. The organization restricts all access to the embeddings data used to enhance AI prompts via RAG
following the least privilege principle, if applicable.
5. This access is controlled in accordance with the organization’s policies regarding 
access management (including approvals, revocations, periodic access reviews).
6. This access is controlled in accordance with the organization’s policies regarding 
authentication (which calls for multi-factor authentication or a similar level of protection).
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure The organization restricts all access to the data used to train, test, and validate AI models; tune AI models; and enhance AI prompts via RAG, if applicable, following the least privilege principle. Further, confirm this access is controlled in accordance with the organization’s policies regarding access management (including approvals, revocations, periodic access reviews) and authentication.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization restricts all access to the data used to train, test, and validate AI models; tune AI models; and enhance AI prompts via RAG, if applicable, following the least privilege principle. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that all access is controlled in accordance with the organization’s policies regarding access management (including approvals, revocations, periodic access reviews) and authentication.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 11 Access Control
  • Control category: 01.0 – Access Control
  • Control reference: 01.c – Privilege Management

Specific to which parts of the overall AI system? [?]
AI application layer:
  • Prompt enhancement via RAG, and associated RAG data sources
AI platform layer:
  • Model tuning and associated datasets
  • AI datasets and data pipelines
Discussed in which authoritative AI security sources? [?]
  • LLM AI Cybersecurity & Governance Checklist
    Feb. 2024, © The OWASP Foundation
    • Where:
      • 3. Checklist > 3.9. Using or implementing large language model solutions > Bullet #2
      • 3. Checklist > 3.9. Using or implementing large language model solutions > Bullet #4

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Organizational > Apply a RBAC model, respecting the least privilege principle
      • 4.1- Security Controls > Technical > Ensure appropriate protection is deployed for test environments

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 1: SSO with IdP and MFA
      • Control DASF 2: Sync users and groups
      • Control DASF 5: Control access to data and other objects
      • Control DASF 16: Secure model features
      • Control DASF 43: Use access control lists
      • Control DASF 57: Use attribute-based access controls (ABAC)

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Expand strong security foundations to the AI ecosystem > Prepare to store and track supply chain assets, code, and training data

  • HiddenLayer’s 2024 AI Threat Landscape Report
    2024, © HiddenLayer
    • Where:
      • Part 4: Predictions and recommendations > 3. Data security and privacy > Bullet #1

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Training data leakage > Mitigations > Access controls

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is included when the assessment’s in-scope AI system(s) leverage data-driven AI models (e.g., non-generative machine learning models, generative AI models).
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI model creator apply.
Restrict access to AI models

HITRUST CSF requirement statement [?] (11.01cAISecSystem.7)

The organization 
(1) restricts the ability to access and modify deployed AI models following the least privilege principle.
This access is controlled in accordance with the organization’s policies regarding
(2) access management (including approvals, revocations, periodic access reviews), and
(3) authentication (which calls for multi-factor authentication or a similar level of protection).

 

Evaluative elements in this requirement statement [?]
1. The organization restricts the ability to access and modify deployed AI models following the 
least privilege principle.
2. This access is controlled in accordance with the organization’s policies regarding 
access management (including approvals, revocations, periodic access reviews).
3. This access is controlled in accordance with the organization’s policies regarding 
authentication (which calls for multi-factor authentication or a similar level of protection).
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure the organization restricts the ability to access and modify deployed AI models (e.g., residing in file formats such as .pkl, .pth, .hdf5, .gguf, .llamafile) following the least privilege principle. Further, confirm this access is controlled in accordance with the organization’s policies regarding access management (including approvals, revocations, periodic access reviews) and authentication.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization restricts the ability to access and modify deployed AI models (e.g., residing in file formats such as .pkl, .pth, .hdf5, .gguf, .llamafile) following the least privilege principle. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that access is controlled in accordance with the organization’s policies regarding access management (including approvals, revocations, periodic access reviews) and authentication.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 11 Access Control
  • Control category: 01.0 – Access Control
  • Control reference: 01.c – Privilege Management

Specific to which parts of the overall AI system? [?]
AI platform layer:
  • Model safety and security systems
  • The deployed AI model

Discussed in which authoritative AI security sources? [?]
  • LLM AI Cybersecurity & Governance Checklist
    Feb. 2024, © The OWASP Foundation
    • Where:
      • 3. Checklist > 3.9. Using or implementing large language model solutions > Bullet #3
      • 3. Checklist > 3.9. Using or implementing large language model solutions > Bullet #4

  • Guidelines for Secure AI System Development
    Nov. 2023, Cybersecurity & Infrastructure Security Agency (CISA)
    • Where:
      • 3. Secure deployment > Protect your model continuously
      • 3. Secure deployment > Secure your infrastructure

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Organizational > Apply a RBAC model, respecting the least privilege principle

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 1: SSO with IdP and MFA
      • Control DASF 2: Sync users and groups
      • Control DASF 24: Control access to models and model assets
      • Control DASF 34: Run models in multiple layers of isolation
      • Control DASF 43: Use access control lists

  • Google Secure AI Framework
    June 2023, © Google
    • Where: Step 4. Apply the six core elements of the SAIF > Expand strong security foundations to the AI ecosystem > Prepare to store and track supply chain assets, code, and training data

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Backdooring models (insider attacks) > Mitigations > Access control and monitoring
      • Model stealing > Mitigations > Secure model deployment
      • Model stealing > Mitigations > Access control measures

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is included when the assessment’s in-scope AI system(s) leverage data-driven AI models (e.g., non-generative machine learning models, generative AI models).
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI platform provider. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI platform provider apply.
Restrict access to interact with the AI model

HITRUST CSF requirement statement [?] (11.01cAISecSystem.8)

The organization restricts the ability to interact with the production AI model through
(1) APIs,
(2) the AI application, and
(3) language model tools such as agents and plugins (if used).
This access is controlled in accordance with the organization’s policies regarding
(4) access management (including approvals, revocations, periodic access reviews), and
(5) authentication.

 

Evaluative elements in this requirement statement [?]
1. The organization restricts the ability to interact with the production AI model through 
APIs following the least privilege principle.
2. The organization restricts the ability to interact with the production AI model through 
the AI application following the least privilege principle.
3. The organization restricts the ability to interact with the production AI model through 
language model tools such as agents and plugins following the least privilege principle
(if used).
4. This access is controlled in accordance with the organization’s policies regarding 
access management (including approvals, revocations, periodic access reviews).
5. This access is controlled in accordance with the organization’s policies regarding 
authentication.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the production AI model security configurations and confirm interaction abilities are restricted as defined in the requirement statement.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the production AI model interaction abilities are restricted. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that all interaction abilities are restricted as defined in the requirement statement.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 11 Access Control
  • Control category: 01.0 – Access Control
  • Control reference: 01.c – Privilege Management

Specific to which parts of the overall AI system? [?]
AI application layer:
  • AI plugins and agents
  • Application AI safety and security systems
  • The deployed AI application (Considered in the underlying HITRUST e1, i1, or r2 assessment)
AI platform layer
  • The AI platform and associated APIs (Considered in the underlying HITRUST e1, i1, or r2 assessment)
  • Model safety and security systems

Discussed in which authoritative AI security sources? [?]
  • OWASP 2023 Top 10 for LLM Applications
    Oct. 2023, © The OWASP Foundation
    • Where:
      • LLM07: Insecure plugin design > Prevention and mitigation strategies > Bullet #5
      • LLM10: Model theft > Prevention and mitigation strategies > Bullet #1

  • Guidelines for Secure AI System Development
    Nov. 2023, Cybersecurity & Infrastructure Security Agency (CISA)
    • Where:
      • 1. Secure design > Design your system for security as well as functionality and performance
      • 3. Secure deployment > Protect your model continuously

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Organizational > Apply a RBAC model, respecting the least privilege principle

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 31: Secure model serving endpoints

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Expand strong security foundations to the AI ecosystem > Prepare to store and track supply chain assets, code, and training data

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Backdooring models (insider attacks) > Mitigations > Access control and monitoring
      • Model inversion > Mitigations > Bullets 1 & 2
      • Exposure of sensitive inferential inputs > Mitigations > Implementing authentication mechanisms

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessment which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This may be a responsibility shared between an AI application provider and their AI platform provider (if used), performed independently on separate layers/components of the overall AI system.
Restrict access to the AI engineering environment and AI code

HITRUST CSF requirement statement [?] (11.01cAISecSystem.9)

The organization restricts all access to 
(1) AI engineering environments;
(2) code used to create, train, and/or deploy AI models; and
(3) code of language model tools such as agents and plugins (if used)
following the least privilege principle.
This access is controlled in accordance with the organization’s policies regarding
(4) access management (including approvals, revocations, periodic access reviews), and
(5) authentication (which calls for multi-factor authentication or a similar level of protection).

 

Evaluative elements in this requirement statement [?]
1. The organization restricts all access to AI engineering environments following the 
least privilege principle.
2. The organization restricts all access to code used to create, train, and/or deploy AI 
models following the least privilege principle.
3. The organization restricts all access to code of language model tools such as agents 
and plugins (if used) following the least privilege principle.
4. This access is controlled in accordance with the organization’s policies regarding 
access management (including approvals, revocations, periodic access reviews).
5. This access is controlled in accordance with the organization’s policies regarding 
authentication (which calls for multi-factor authentication or a similar level of protection).
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure the organization restricts all access to AI engineering environments; code used to create, train, and/or deploy AI models; and code of language model tools such as agents and plugins (if used) following the least privilege principle. Further, confirm this access is controlled in accordance with the organization’s policies regarding access management (including approvals, revocations, periodic access reviews) and authentication.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization restricts all access to AI engineering environments; code used to create, train, and/or deploy AI models; and code of language model tools such as agents and plugins (if used) following the least privilege principle. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that access is controlled in accordance with the organization’s policies regarding access management (including approvals, revocations, periodic access reviews) and authentication.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 11 Access Control
  • Control category: 01.0 – Access Control
  • Control reference: 01.c – Privilege Management

Specific to which parts of the overall AI system? [?]
AI application layer:
  • AI plugins and agents
  • The deployed AI application (Considered in the underlying HITRUST e1, i1, or r2 assessment)
AI platform layer
  • The AI platform and associated APIs (Considered in the underlying HITRUST e1, i1, or r2 assessment)
  • Model engineering environment and model pipeline
Discussed in which authoritative AI security sources? [?]
Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 24: Control access to models and model assets
      • Control DASF 30: Encrypt models
      • Control DASF 33: Manage credentials securely
      • Control DASF 43: Use access control lists

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Expand strong security foundations to the AI ecosystem > Prepare to store and track supply chain assets, code, and training data

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Attacks on the infrastructure hosting AI services > Mitigations > Least privilege access control

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This may be a responsibility shared between an AI application provider and their AI platform provider (if used), performed independently on separate layers/components of the overall AI system.
TOPIC: Encryption of AI assets
Encrypt traffic to and from the AI model

HITRUST CSF requirement statement [?] (09.09sAISecOrganizational.1)

Communication channels between the AI model and the interface responsible for 
displaying the AI model's outputs (e.g., the AI application interface) are
(1) encrypted using secure protocols (e.g., TLS).

 

Evaluative elements in this requirement statement [?]
1. Communication channels between the AI model and the interface responsible for 
displaying the AI model outputs (e.g., the AI application interface) are encrypted using
secure protocols (e.g., TLS).
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review configurations to confirm communication channel(s) between the AI model and the interface responsible for displaying the AI model’s outputs (e.g., the AI application interface) are encrypted using secure protocols (e.g., TLS).

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the number of communication channels between AI models and AI application interfaces are encrypted of the total amount of such interfaces. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that all communication channels on the AI model are encrypted using secure protocols (e.g., TLS).

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 09 Transmission Protection
  • Control category: 09.0 – Communications and Operations Management
  • Control reference: 09.s Information Exchange Policies and Procedures

Specific to which parts of the overall AI system? [?]
AI platform layer
  • The AI platform and associated APIs (Considered in the underlying HITRUST e1, i1, or r2 assessment)
Discussed in which authoritative AI security sources? [?]

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Organizational > Ensure ML applications comply with data security requirements

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 9: Encrypt data in transit
      • Control DASF 46: Store and retrieve embeddings securely

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Model stealing > Mitigations > Secure model deployment
      • Multitenancy in ML environments > Mitigations > Data encryption
      • Exposure of sensitive inferential inputs > Mitigations > Proper configuration of encryption in transit
      • Exposure of sensitive inferential inputs > Mitigations > Secure communication channels
      • Attacks on the infrastructure hosting AI services > Mitigations > Data encryption

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI application provider.
Encrypt AI assets at rest

HITRUST CSF requirement statement [?] (11.01vAISecSystem.1)

The organization encrypts at rest
(1) the datasets used to train, test, and validate AI models;
(2) the data used to fine-tune AI models;
(3) embeddings (if used); and
(4) AI models.
This is performed in observance of the organization’s encryption policies relating to
(5) encryption strength and
(6) key management.

 

Evaluative elements in this requirement statement [?]
1. The organization encrypts AI training, testing, and validation data at rest.
2. The organization encrypts AI fine-tuning data at rest.
3. The organization encrypts embeddings at rest (if used).
4. The organization encrypts AI models at rest.
5. This is performed in observance of the organization’s encryption policies relating to 
encryption strength.
6. This is performed in observance of the organization’s encryption policies relating to 
key management.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review configurations to ensure the organization encrypts at rest the datasets used to train, test, and validate AI models; the data used to tune AI models; embeddings (if used); and AI models. Further, confirm this is performed in observance of the organization’s encryption policies relating to encryption strength and key management.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization encrypts at rest the datasets used to train, test, and validate AI models; the data used to tune AI models; embeddings (if used); and AI models. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that encryption is deployed in observance of the organization’s encryption policies relating to encryption strength and key management.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 11 Access Control
  • Control category: 01.0 – Access Control
  • Control reference: 01.v – Information Access Restriction

Specific to which parts of the overall AI system? [?]
AI platform layer
  • Model tuning and associated datasets
  • The deployed AI model
  • AI datasets and data pipelines
Discussed in which authoritative AI security sources? [?]
  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Organizational > Ensure ML applications comply with data security requirements

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 8: Encrypt data at rest
      • Control DASF 30: Encrypt models
      • Control DASF 46: Store and retrieve embeddings securely

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Multitenancy in ML environments > Mitigations > Data encryption
      • Attacks on the infrastructure hosting AI services > Mitigations > Data encryption

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is included when confidential and/or sensitive data was used for model training, model tuning, and/or prompt enhancement via RAG for the assessment’s in-scope AI systems.
    • This requirement is also included when the assessment’s in-scope AI system(s) leverage models with technical architectures that are confidential to the organization.
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI platform provider and/or model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI platform provider and/or model creator apply.
TOPIC: AI system logging and monitoring
Log AI system inputs and outputs

HITRUST CSF requirement statement [?] (12.09abAISecSystem.6)

The AI system logs all inputs (prompts, queries, inference requests) to and outputs 
(inferences, responses, conclusions) from the AI model, including
(1) the exact input (e.g., the prompt, the API call),
(2) the date and time of the input,
(3) the user account making the request,
(4) where the request originated,
(5) the exact output provided, and
(6) the version of the model used.
AI system logs are
(7) managed (i.e., retained, protected, and sanitized) in accordance with the organization’s
policy requirements.

 

Evaluative elements in this requirement statement [?]
1. The AI system logs the exact input (e.g., the prompt, the API call) to the AI model.
2. The AI system logs the date and time of the input to the AI model.
3. The AI system logs the user account making the request of the AI model.
4. The AI system logs where the request originated.
5. The AI system logs the exact output provided by the AI model.
6. The AI system logs the version of the model providing the output.
7. AI system logs are managed (i.e., retained, protected, and sanitized) in accordance 
with the organization’s policy requirements.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure the organization logs all inputs (prompts, queries, inference requests) to and outputs (inferences, responses, conclusions) from the AI model, including the exact input, the date and time of the input, the user account making the request, where the request originated, the exact output provided, and the version of the model used. Further, confirm the AI system logs are managed (i.e., retained, protected, and sanitized) in accordance with the organization’s policy requirements.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization logs all inputs (prompts, queries, inference requests) to and outputs (inferences, responses, conclusions) from the AI model, including the exact input, the date and time of the input, the user account making the request, where the request originated, the exact output provided, and the version of the model used. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm the AI system logs are managed (i.e., retained, protected, and sanitized) in accordance with the organization’s policy requirements.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 12 Audit Logging & Monitoring
  • Control category: 09.0 – Communications and Operations Management
  • Control reference: 09.aa Audit Logging

Specific to which parts of the overall AI system? [?]
AI application layer:
  • Application AI safety and security systems
AI platform layer
  • Model safety and security systems
Discussed in which authoritative AI security sources? [?]
  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where:
      • Building generative AI solutions > Building the solution > Data Management > Bullet 3
      • Building generative AI solutions > Building the solution > Testing generative AI solutions > Bullet 3
      • Building generative AI solutions > Building the solution > Data Management > Bullet 4

Discussed in which commercial AI security sources? [?]
  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Model inversion > Mitigations > Bullet 5
      • Attacks on the infrastructure hosting AI services > Mitigations > Continuous monitoring and logging

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This may be a responsibility shared between an AI application provider and their AI platform provider (if used), performed independently on separate layers/components of the overall AI system.
Monitor AI system inputs and outputs

HITRUST CSF requirement statement [?] (12.09abAISecSystem.4)

The organization performs monitoring, on at least a monthly basis, of the
(1) inputs (prompts, queries, inference requests) to and
(2) outputs (inferences, responses, conclusions) from
the AI model for anomalies indicative of attacks or compromise and to ensure that filters
and other guardrails are operating as expected.

 

Evaluative elements in this requirement statement [?]
1. The organization performs monitoring, on at least a monthly basis, of the inputs
(prompts, queries, inference requests) to AI model for anomalies indicative of attacks or
compromise and to ensure that input filters are operating as expected.
2. The organization performs monitoring, on at least a monthly basis, of the outputs 
(inferences, responses, conclusions) from the AI model for anomalies indicative of attacks
or compromise and to ensure that output filters are operating as expected.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure the organization performs monitoring, on at least a monthly basis, of the inputs (prompts, queries, inference requests) to and outputs (inferences, responses, conclusions) from the AI model for anomalies indicative of attacks or compromise and to ensure that filters and other guardrails are operating as expected.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization performs monitoring, on at least a monthly basis, of the inputs (prompts, queries, inference requests) to and outputs (inferences, responses, conclusions) from the AI model for anomalies indicative of attacks or compromise and to ensure that filters and other guardrails are operating as expected.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 12 Audit Logging & Monitoring
  • Control category: 09.0 – Communications and Operations Management
  • Control reference: 09.ab – Monitoring System Use

Specific to which parts of the overall AI system? [?]
AI application layer:
  • Application AI safety and security systems
AI platform layer
  • Model safety and security systems
Discussed in which authoritative AI security sources? [?]
  • Guidelines for Secure AI System Development
    Nov. 2023, Cybersecurity & Infrastructure Security Agency (CISA)
    • Where:
      • 3. Secure deployment > Protect your model continuously
      • 4. Secure operation and maintenance > Monitor your system’s behavior

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Technical > Define and monitor indicators for proper functioning of the model

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 21: Monitor data and AI system from a single pane of glass
      • Control DASF 36: Set up monitoring alerts
      • Control DASF 55: Monitor audit logs

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Expand strong security foundations to the AI ecosystem > Prepare to store and track supply chain assets, code, and training data

  • HiddenLayer’s 2024 AI Threat Landscape Report
    2024, © HiddenLayer
    • Where:
      • Part 4: Predictions and recommendations > 6. Continuous monitoring and incident response > Bullet #1

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Backdooring models (insider attacks) > Mitigations > Access control and monitoring
      • Model stealing > Mitigations > Access control measures
      • Model inversion > Mitigations > Bullet 7
      • Self-hosted OSS LLMs Security > Mitigations > Secure deployment and monitoring

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This may be a responsibility shared between an AI application provider and their AI platform provider (if used), performed independently on separate layers/components of the overall AI system.
Monitoring for data, models, and configs for suspicious changes

HITRUST CSF requirement statement [?] (12.09abAISecSystem.5)

The organization performs monitoring, on at least a monthly basis, for suspicious manipulation of 
(1) AI-related datasets;
(2) AI models;
(3) AI-relevant code (e.g., code used to create, train, and/or deploy AI models, code of
language model tools such as agents and plugins); and
(4) AI-relevant configurations (e.g., metaprompts)
that might compromise the AI system's performance or security.

 

Evaluative elements in this requirement statement [?]
1. The organization performs monitoring, on at least a monthly basis, for suspicious
manipulation of AI datasets that might compromise the AI system's performance or security.
2. The organization performs monitoring, on at least a monthly basis, for suspicious 
manipulation of AI models that might compromise the AI system's performance or security.
3. The organization performs monitoring, on at least a monthly basis, for suspicious
manipulation of AI-relevant code (e.g., code used to create, train, and/or deploy AI models,
code of language model tools such as agents and plugins) that might compromise the AI
system's performance or security.
4. The organization performs monitoring, on at least a monthly basis, for suspicious
manipulation of AI-relevant configurations (e.g., metaprompts) that might compromise
the AI system's performance or security.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure the organization performs monitoring, on at least a monthly basis, for suspicious manipulation of AI-related datasets; AI models; AI-relevant code (e.g., code used to create, train, and/or deploy AI models, code of language model tools such as agents and plugins); and AI-relevant configurations (e.g., metaprompts) that might compromise the AI system’s performance or security.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization performs monitoring, on at least a monthly basis, for suspicious manipulation of AI-related datasets; AI models; AI-relevant code (e.g., code used to create, train, and/or deploy AI models, code of language model tools such as agents and plugins); and AI-relevant configurations (e.g., metaprompts) that might compromise the AI system’s performance or security.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 12 Audit Logging & Monitoring
  • Control category: 09.0 – Communications and Operations Management
  • Control reference: 09.ab – Monitoring System Use

Specific to which parts of the overall AI system? [?]
AI application layer:
  • AI plugins and agents
  • Application AI safety and security systems
  • The deployed AI application (Considered in the underlying HITRUST e1, i1, or r2 assessment)
AI platform layer
  • The AI platform and associated APIs (Considered in the underlying HITRUST e1, i1, or r2 assessment)
  • Model safety and security systems
  • The deployed AI model
  • Model engineering environment and model pipeline
  • AI datasets and data pipelines
Discussed in which authoritative AI security sources? [?]
Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • DASF 14: Audit actions performed on datasets
      • DASF 23: Register, version, approve, promote, deploy and monitor models
      • DASF 55: Monitor audit logs

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Backdooring models (insider attacks) > Mitigations > Data provenance and auditability

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This may be a responsibility shared between an AI application provider and their AI platform provider (if used), performed independently on separate layers/components of the overall AI system.
TOPIC: Documenting and inventorying AI systems
Inventory deployed AI systems

HITRUST CSF requirement statement [?] (07.07aAISecOrganizational.3)

The organization maintains a documented inventory of its deployed AI systems 
which at minimum identifies the
(1) associated AI platforms used by the AI system (if any);
(2) AI model(s) used (with name and version);
(3) AI system owner,
(4) AI system sensitivity / risk categorization, and
(5) associated AI service provider(s) (if any).
This inventory is
(6) periodically (at least semiannually) reviewed and updated.

 

Evaluative elements in this requirement statement [?]
1. The organization maintains a documented inventory of its deployed AI systems which
at minimum identifies the associated AI platforms used by the AI system (if any).
2. The organization maintains a documented inventory of its deployed AI systems which
at minimum identifies the AI model(s) used (with name and version).
3. The organization maintains a documented inventory of its deployed AI systems which
at minimum identifies the AI system owner.
4. The organization maintains a documented inventory of its deployed AI systems which
at minimum identifies the AI system sensitivity / risk categorization.
5. The organization maintains a documented inventory of its deployed AI systems which
at minimum identifies the associated AI service provider(s) (if any).
6. The organization’s inventory of deployed AI systems is periodically (at least semiannually)
reviewed and updated.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample-based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure the organization maintains a documented inventory of its deployed AI systems which at minimum identifies the associated AI platforms used by the AI system (if any); AI model(s) used (with name and version); AI system owner, AI system sensitivity / risk categorization, and associated AI service provider(s) (if any). Further, confirm that this inventory is periodically (at least annually) reviewed and updated.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization maintains a documented inventory of its deployed AI systems which at minimum identifies the associated AI platforms used by the AI system (if any); AI model(s) used (with name and version); AI system owner, AI system sensitivity / risk categorization, and associated AI service provider(s) (if any). Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm the inventory is periodically (at least annually) reviewed and updated.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 07 Vulnerability Management
  • Control category: 07.0 – Asset Management
  • Control reference: 07.a – Inventory of Assets

Specific to which parts of the overall AI system? [?]
AI application layer:
  • The deployed AI application (Considered in the underlying HITRUST e1, i1, or r2 assessment)
AI platform layer
  • The AI platform and associated APIs (Considered in the underlying HITRUST e1, i1, or r2 assessment)
  • The deployed AI model
Discussed in which authoritative AI security sources? [?]
Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • DASF 18: Govern model assets
      • DASF 23: Register, version, approve, promote and deploy models

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Expand strong security foundations to the AI ecosystem > Prepare to store and track supply chain assets, code, and training data
      • Step 4. Apply the six core elements of the SAIF > Harmonize platform-level controls to ensure consistent security across the organization > Review usage of AI and lifecycle of AI-based apps
      • Step 4. Apply the six core elements of the SAIF > Contextualize AI system risks in surrounding business processes > Build an inventory of AI models and their risk profile based on the specific use cases and shared responsibility when leveraging third-party solutions and services

  • HiddenLayer’s 2024 AI Threat Landscape Report
    2024, © HiddenLayer
    • Where:
      • Part 4: Predictions and recommendations > 1. Discovery and asset management

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No (dual responsibility). The AI application provider and its AI service providers (if used) are responsible for independently performing this requirement outside of the AI system’s technology stack.
Maintain a catalog of trusted data sources for AI

HITRUST CSF requirement statement [?] (07.07aAISecOrganizational.4)

The organization maintains a catalog of trusted data sources for use in 
(1) training, testing, and validating AI models;
(2) fine-tuning AI models; and
(3) enhancing AI prompts via RAG, as applicable.
This catalog is
(4) periodically (at least semiannually) reviewed and updated.

 

Evaluative elements in this requirement statement [?]
1. The organization maintains a catalog of trusted data sources for use
in training, testing, and validating AI models, as applicable.
2. The organization maintains a catalog of trusted data sources for use
in fine-tuning AI models, as applicable.
3. The organization maintains a catalog of trusted data sources for use
in enhancing AI prompts via RAG, as applicable.
4. The organization's catalog of trusted data sources for use in AI is
periodically (at least semiannually) reviewed and updated.

in enhancing AI prompts via RAG, as applicable.

Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, evidence that the organization maintains a catalog of trusted data sources for use in training, testing, and validating AI models; fine-tuning AI models; and enhancing AI prompts via RAG, as applicable.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization maintains a catalog of trusted data sources for use in training, testing, and validating AI models; tuning AI models; and enhancing AI prompts via RAG, as applicable.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 07 Vulnerability Management
  • Control category: 07.0 – Asset Management
  • Control reference: 07.a – Inventory of Assets

Specific to which parts of the overall AI system? [?]
AI application layer:
  • Prompt enhancement via RAG, and associated RAG data sources
AI platform layer
  • Model tuning and associated datasets
  • AI datasets and data pipelines
Discussed in which authoritative AI security sources? [?]
Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 11: Capture and view data lineage
      • Control DASF 17: Track and reproduce the training data used for ML model training

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is included when the assessment’s in-scope AI system(s) leverage data-driven AI models (e.g., non-generative machine learning models, generative AI models).
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI model creator apply.
AI data and data supply inventory

HITRUST CSF requirement statement [?] (07.07aAISecOrganizational.5)

The organization maintains a documented inventory of data used to
(1) train, test, and validate AI models;
(2) fine-tune AI models; and
(3) enhance AI prompts via RAG, as applicable.
At minimum, this inventory contains the data
(4) provenance and
(5) sensitivity level (e.g., protected, confidential, public).
This inventory is
(6) periodically (at least semiannually) reviewed and updated.

 

Evaluative elements in this requirement statement [?]
1. The organization maintains a documented inventory of data used to train, test, and
validate AI models, as applicable.
2. The organization maintains a documented inventory of data used to fine-tune AI models,
as applicable.
3. The organization maintains a documented inventory of data used to enhance AI 
prompts via RAG, as applicable.
4. The organization’s AI data inventory contains the data provenance.
5. The organization’s AI data inventory contains the data sensitivity level (e.g., 
protected, confidential, public).
6. The organization’s AI data inventory is periodically (at least semiannually) reviewed and
updated.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure the organization maintains a documented inventory of data used to train, test, and validate AI models; tune AI models; and enhance AI prompts via RAG, as applicable. Further, confirm this inventory contains the data provenance and sensitivity level (e.g., protected, confidential, public).

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization maintains a documented inventory of data used to train, test, and validate AI models; tune AI models; and enhance AI prompts via RAG, as applicable. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm the inventory contains the data provenance and sensitivity level (e.g., protected, confidential, public).

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 07 Vulnerability Management
  • Control category: 07.0 – Asset Management
  • Control reference: 07.a – Inventory of Assets

Specific to which parts of the overall AI system? [?]
AI application layer:
  • Prompt enhancement via RAG) and associated RAG data sources
AI platform layer
  • Model tuning and associated datasets
  • AI datasets and data pipelines
Discussed in which authoritative AI security sources? [?]
  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where:
      • Using generative AI safely and responsibility > Ethics > Transparency and explainability > Practical recommendations > Bullet #2
      • Using generative AI safely and responsibility > Data protection and privacy > Accuracy > Practical recommendations > Bullet #3

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 11: Capture and view data lineage
      • Control DASF 17: Track and reproduce the training data used for ML model training

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Expand strong security foundations to the AI ecosystem > Prepare to store and track supply chain assets, code, and training data

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Backdooring models (insider attacks) > Mitigations > Data provenance and auditability

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is included when the assessment’s in-scope AI system(s) leverage data-driven AI models (e.g., non-generative machine learning models, generative AI models).
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI model creator apply.
Model card publication (for model builders)

HITRUST CSF requirement statement [?] (06.10hAISecSystem.4)

The organization publishes model cards for the AI models it produces, which (minimally) include the following elements: 
(1) model details;
(2) intended use;
(3) training details (e.g., data, methodology);
(4) evaluation details (e.g., methodology, metrics, outcomes); and
(5) usage caveats (such as potential bias, risks, or limitations) and associated recommendations.

 

Evaluative elements in this requirement statement [?]
1. The organization publishes model cards for the AI models it produces, which includes model details.
2. The organization publishes model cards for the AI models it produces, which includes intended use.
3. The organization publishes model cards for the AI models it produces, which includes training details (e.g., data, methodology).
4. The organization publishes model cards for the AI models it produces, which includes evaluation details (e.g., methodology, metrics, outcomes).
5. The organization publishes model cards for the AI models it produces, which includes caveats (such 
as potential bias, risks, or limitations) and associated recommendations.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the internally produced AI models deployed in in-scope AI systems and confirm the organization published a model card for each which included the elements outlined in this requirement statement.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate percentage of the organization’s internally produced AI models that feature a published model card relative to the total number of the organization’s internally produced AI models. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that all internally produced AI models feature published model cards inclusive of the elements outlined in this requirement statement.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 06 Configuration Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10h- Control of Operational Software

Specific to which parts of the overall AI system? [?]
AI platform layer:
  • The deployed AI model

Discussed in which authoritative AI security sources? [?]
  • Generative AI framework for HM Government
    2023, Central Digital and Data Office, UK Government
    • Where:
      • Using generative AI safely and responsibly > Ethics > Transparency and explainability > Practical recommendations > Bullet #4

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • Also, this requirement is only applicable to machine learning-based AI models (i.e., generative and predictive AI).

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No. However, this requirement is applicable only to model builders.

TOPIC: Filtering and sanitizing AI data, inputs, and outputs
Dataset sanitization

HITRUST CSF requirement statement [?] (07.10bAISecSystem.2)

Data for 
(1) AI model training,
(2) AI model fine-tuning, or
(3) prompt enhancement via RAG—if used—is checked prior to usage (e.g., using statistical methods, through manual inspection,
or through automated means) for suspicious unexpected values or patterns which could be adversarial or malicious in nature
(e.g., poisoned samples).
Identified anomalous entries are
(4) removed.

 

Evaluative elements in this requirement statement [?]
1. Data for AI model training is checked prior to usage (e.g., using statistical 
methods, through manual inspection, or through automated means) for suspicious
unexpected values or patterns which could be adversarial or malicious in nature
(e.g., poisoned samples).
2. Data for AI model fine-tuning is checked prior to usage (e.g., using statistical 
methods, through manual inspection, or through automated means) for suspicious
unexpected values or patterns which could be adversarial or malicious in nature
(e.g., poisoned samples).
3. Data for prompt enhancement for RAG (if used) is checked prior to usage 
(e.g., using statistical methods, through manual inspection, or through automated
means) for suspicious unexpected values or patterns which could be adversarial
or malicious in nature (e.g., poisoned samples).
4. Identified suspicious unexpected values or patterns are removed.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure data for AI model training, AI model tuning, or prompt enhancement via RAG if used, is checked prior to usage for anomalies such as unexpected values or patterns (e.g., using statistical methods, through manual inspection). Further, confirm that the identified anomalous entries are removed.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the data for AI model training, AI model tuning, or prompt enhancement via RAG if used, is checked prior to usage for anomalies such as unexpected values or patterns (e.g., using statistical methods, through manual inspection). Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm the identified anomalous entries are removed.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 07 Vulnerability Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10.b – Input Data Validation

Specific to which parts of the overall AI system? [?]
AI application layer:
  • Prompt enhancement via RAG, and associated RAG data sources
AI platform layer
  • AI datasets and data pipelines
Discussed in which authoritative AI security sources? [?]
  • OWASP 2023 Top 10 for LLM Applications
    Oct. 2023, © The OWASP Foundation
    • Where:
      • LLM03: Training data poisoning > Prevention and Mitigation Strategies > Bullet #5
      • LLM06: Sensitive information disclosure > Prevention and Mitigation Strategies > Bullet #1

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Technical: Control all data used by the ML model > Use methods to clean the training dataset from suspicious samples

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • DASF 7: Enforce data quality checks on batch and streaming datasets
      • DASF 15: Explore datasets and identify problems

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Adversarial samples > Mitigations > Input preprocessing
      • Model poisoning > Mitigations > Bullet 2
      • Training data poisoning > Mitigations > Dataset sanitization

Control functions against which AI security threats? [?]

Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is included when the assessment’s in-scope AI system(s) leverage data-driven AI models (e.g., non-generative machine learning models, generative AI models).
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, fully. This requirement may be the sole responsibility of the AI platform provider and/or model creator. Or, depending on the AI system’s architecture, only evaluative elements that are the sole responsibility of the AI platform provider and/or model creator apply.
Input filtering

HITRUST CSF requirement statement [?] (07.10bAISecSystem.1)

Before they are processed by the model, the AI system
(1) actively filters user inputs (regardless of modality or source, including attachments) for
suspicious and/or unexpected values or patterns which could be adversarial or malicious in nature.

 

Evaluative elements in this requirement statement [?]
1. Before they are processed by the model, the AI system actively filters user inputs (regardless of modality or source, 
including attachments) for suspicious and/or unexpected values or patterns which could be adversarial or malicious in nature.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample-based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system configurations to ensure that user inputs (regardless of modality or source, including attachments) are actively filtered before they reach the model.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization checks user inputs (regardless of modality or source, including attachments) for anomalies and filters suspicious and/or unexpected values or patterns which could be adversarial or malicious in nature. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm the identified anomalous entries are removed.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 07 Vulnerability Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10.b – Input Data Validation

Specific to which parts of the overall AI system? [?]
AI application layer:
  • Application AI safety and security systems
AI platform layer
  • Model safety and security systems
Discussed in which authoritative AI security sources? [?]
  • OWASP 2023 Top 10 for LLM Applications
    Oct. 2023, © The OWASP Foundation
    • Where:
      • LLM02: Insecure output handling > Prevention and Mitigation Strategies > Bullet #2
      • LLM04: Model denial of service > Prevention and Mitigation Strategies > Bullet #1
      • LLM06: Sensitive information disclosure > Prevention and Mitigation Strategies > Bullet #2
      • LLM07: Insecure plugin design > Prevention and Mitigation Strategies > Bullet #1
      • LLM07: Insecure plugin design > Prevention and Mitigation Strategies > Bullet #2
      • LLM10: Model theft > Prevention and Mitigation Strategies > Bullet #5

  • Guidelines for Secure AI System Development
    Nov. 2023, Cybersecurity & Infrastructure Security Agency (CISA)
    • Where:
      • 1. Secure design > Design your system for security as well as functionality and performance
      • 3. Secure deployment > Protect your model continuously

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Specific ML: Implement tools to detect if a data point is an adversarial example or not

Discussed in which commercial AI security sources? [?]
  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Prompt injection > Mitigations > Prompt validation and filtering
      • Indirect prompt injection > Mitigations > Bullet 1
      • Sponge samples > Mitigations > Input validation and normalization
      • Fuzzing > Mitigations > Input validation
      • Model inversion > Mitigations > Bullet 3
      • Training data poisoning > Mitigations > Input validation

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This responsibility may be shared between an AI platform provider (if used) and the AI application provider.
Output encoding

HITRUST CSF requirement statement [?] (07.10mAISecOrganizational.4)

The information system 
(1) applies output encoding to textual AI model output to prevent traditional injection attacks
(e.g., remote code execution) which can create a vulnerability when processed.

 

Evaluative elements in this requirement statement [?]
1. The information system applies output encoding to textual AI model output to prevent 
traditional injection attacks (e.g., remote code execution) which can create a vulnerability when processed.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, obtain and examine evidence to confirm the information system applied output encoding on text based model output.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the percentage of text based model output that is encoded. Reviews, tests, or audits are completed by the information system applies output encoding to textual AI model output to prevent traditional injection attacks (e.g., remote code execution) which can create a vulnerability when processed.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 07 Vulnerability Management
  • Control category: 10.0 – Information Systems Acquisition, Development, and Maintenance
  • Control reference: 10.m – Control of Technical Vulnerabilities

Specific to which parts of the overall AI system? [?]
  • AI application layer:
    • Application AI safety and security systems
  • AI platform layer
    • Model safety and security systems
Discussed in which authoritative AI security sources? [?]
Discussed in which commercial AI security sources? [?]
Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is only included when the assessment’s in-scope AI system leverages a generative AI model.
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This responsibility may be shared between an AI platform provider (if used) and the AI application provider.
Output filtering

HITRUST CSF requirement statement [?] (07.10eAISecSystem.1)

Unless specifically required, the AI system
(1) actively filters or otherwise prevents sensitive data (e.g., personal phone numbers) contained within
generative AI model outputs from being shown to end users of the AI system.

 

Evaluative elements in this requirement statement [?]
1. Unless specifically required, the AI system actively filters or otherwise prevents sensitive data (e.g., personal phone numbers) 
contained within generative AI model outputs from being shown to end users of the AI system.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, obtain and examine evidence to confirm sensitive data in generative AI model outputs were actively filtered or otherwise prevented from being included in user-facing outputs.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate the percentage of generative AI model-produced sensitive data in the information system’s user-facing output. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm sensitive data in the AI model output is actively censored.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 19 Data Protection & Privacy
  • Control category: 13.0 Privacy Practices
  • Control reference: 13.k Use and Disclosure

Specific to which parts of the overall AI system? [?]
  • AI application layer:
    • Application AI safety and security systems
  • AI platform layer
    • Model safety and security systems
Discussed in which authoritative AI security sources? [?]
Discussed in which commercial AI security sources? [?]
Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement is only included when the assessment’s in-scope AI system leverages a generative AI model.
    • The Security for AI systems regulatory factor must also be present in the assessment.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This responsibility may be shared between an AI platform provider (if used) and the AI application provider.
TOPIC: Resilience of the AI system
Updating incident response for AI specifics

HITRUST CSF requirement statement [?] (15.11cAISecOrganizational.1)

The organization’s established security incident detection and response processes 
address the detection of and recovery from AI-specific threats (e.g., poisoning, evasion)
through
(1) updates to the organization’s security incident plans / playbooks;
(2) consideration of AI-specific threats in security incident tabletop exercises;
(3) recording the specifics of AI-specific security incidents that have occurred;
and incorporating
(4) logs and
(5) alerts
from deployed AI systems into the organization’s monitoring and security
incident detection tools.

 

Evaluative elements in this requirement statement [?]
1. The organization’s established security incident detection and response processes
address the detection of and recovery from AI-specific threats (e.g., poisoning, evasion)
through updates to the organization’s security incident plans / playbooks.
2. The organization’s established security incident detection and response processes 
address the detection of and recovery from AI-specific threats (e.g., poisoning, evasion)
through consideration of AI-specific threats in security incident tabletop exercises.
3. The organization’s established security incident detection and response processes 
address the detection of and recovery from AI-specific threats (e.g., poisoning, evasion)
through recording the specifics of AI-specific security incidents that have occurred.
4. The organization’s established security incident detection and response processes 
address the detection of and recovery from AI-specific threats (e.g., poisoning, evasion)
through and incorporating logs from deployed AI systems into the organization’s
monitoring and security incident detection tools.
5. The organization’s established security incident detection and response processes 
address the detection of and recovery from AI-specific threats (e.g., poisoning, evasion)
through and incorporating alerts from deployed AI systems into the organization’s
monitoring and security incident detection tools.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample-based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure the organization’s established security incident detection and response processes address the detection of and recovery from AI-specific threats (e.g., poisoning, evasion) through updates to the organization’s security incident plans / playbooks; consideration of AI-specific threats in security incident tabletop exercises; recording the specifics of AI-specific security incidents that have occurred. Further, confirm monitoring and security incident detection tools incorporate the logs and alerts from deployed AI systems.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if the organization’s established security incident detection and response processes address the detection of and recovery from AI-specific threats (e.g., poisoning, evasion) through updates to the organization’s security incident plans / playbooks; consideration of AI-specific threats in security incident tabletop exercises; recording the specifics of AI-specific security incidents that have occurred. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that monitoring and security incident detection tools incorporate the logs and alerts from deployed AI systems.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 15 Incident Management
  • Control category: 11.0 – Information Security Incident Management
  • Control reference: 11.c- Responsibilities and Procedures

Specific to which parts of the overall AI system? [?]
  • N/A, not AI component-specific
Discussed in which authoritative AI security sources? [?]
  • LLM AI Cybersecurity & Governance Checklist
    Feb. 2024, © The OWASP Foundation
    • Where:
      • 3. Checklist > 3.1. Adversarial risk > Bullet #3
      • 3. Checklist > 3.9. Using or implementing large language model solutions > Bullet #13

  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Organizational: Include ML applications into detection and response to security incident processes

Discussed in which commercial AI security sources? [?]
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Control DASF 39: Platform security – Incident response team

  • Google Secure AI Framework
    June 2023, © Google
    • Where:
      • Step 4. Apply the six core elements of the SAIF > Extend detection and response to bring AI into an organization’s threat universe > Prepare to respond to attacks against AI and also to issues raised by AI output
      • Step 4. Apply the six core elements of the SAIF > Extend detection and response to bring AI into an organization’s threat universe > Adjust your abuse policy and incident response processes to AI-specific incident types such as malicious content creation or AI privacy violoations

  • HiddenLayer’s 2024 AI Threat Landscape Report
    2024, © HiddenLayer
    • Where:
      • Part 4: Predictions and recommendations > 6. Continuous monitoring and incident response > Bullet 2

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where:
      • Backdooring models (insider attacks) > Mitigations > Adversary detection and response

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.
    • No other assessment tailoring factors affect this requirement.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • No (dual responsibility). The AI application provider and its AI service providers (if used) are responsible for independently performing this requirement outside of the AI system’s technology stack.
Backing up AI system assets

HITRUST CSF requirement statement [?] (16.09lAISecOrganizational.1)

Backup copies of the following AI assets are created: 
(1) training, test, and validation datasets;
(2) code used to create, train, and/or deploy AI models;
(3) fine-tuning data;
(4) models;
(5) language model tools (e.g., plugins, agents);
(6) AI system configurations (e.g., metaprompts); and
(7) prompt enhancement data for RAG, as applicable.
These backups are
(8) managed in accordance with the organization’s policies addressing backups of data and
software.

 

Evaluative elements in this requirement statement [?]
1. Backup copies of AI training, test, and validation datasets are created.
2. Backup copies of code used to create, train, and/or deploy AI models is created.
3. Backup copies of AI fine-tuning data is created, if applicable.
4. Backup copies of AI models are created.
5. Backup copies of language model tools (e.g., plugins, agents) are created, if applicable.
6. Backup copies of AI system configurations (e.g., metaprompts) are created.
7. Backup copies of prompt enhancement data used for RAG are created, if applicable.
8. These backups are managed in accordance with the organization’s policies addressing
backups of data and software.
Illustrative procedures for use during assessments [?]

  • Policy: Examine policies related to each evaluative element within the requirement statement. Validate the existence of a written or undocumented policy as defined in the HITRUST scoring rubric.

  • Procedure: Examine evidence that written or undocumented procedures exist as defined in the HITRUST scoring rubric. Determine if the procedures and address the operational aspects of how to perform each evaluative element within the requirement statement.

  • Implemented: Examine evidence that all evaluative elements within the requirement statement have been implemented as defined in the HITRUST scoring rubric, using a sample based test where possible for each evaluative element. Example test(s):
    • For example, review the AI system to ensure backup copies of AI assets are created and include training, test, and validation datasets, code used to create, train, and/or deploy AI models, tuning data, models, language model tools (e.g., plugins, agents), AI system configurations (e.g., metaprompts), prompt enhancement data for RAG. Further, confirm the backups are managed in accordance with the organization’s policies addressing backups of data and software.

  • Measured: Examine measurements that formally evaluate and communicate the operation and/or performance of each evaluative element within the requirement statement. Determine the percentage of evaluative elements addressed by the organization’s operational and/or independent measure(s) or metric(s) as defined in the HITRUST scoring rubric. Determine if the measurements include independent and/or operational measure(s) or metric(s) as defined in the HITRUST scoring rubric. Example test(s):
    • For example, measures indicate if backup copies of AI assets are created and include training, test, and validation datasets, code used to create, train, and/or deploy AI models, tuning data, models, language model tools (e.g., plugins, agents), AI system configurations (e.g., metaprompts), prompt enhancement data for RAG. Reviews, tests, or audits are completed by the organization to measure the effectiveness of the implemented controls and to confirm that backups are managed in accordance with the organization’s policies addressing backups of data and software.

  • Managed: Examine evidence that a written or undocumented risk treatment process exists, as defined in the HITRUST scoring rubric. Determine the frequency that the risk treatment process was applied to issues identified for each evaluative element within the requirement statement.

Placement of this requirement in the HITRUST CSF [?]

  • Assessment domain: 16 – Business Continuity & Disaster Recovery
  • Control category: 09.0 – Communications and Operations Management
  • Control reference: 09.l – Back-up

Specific to which parts of the overall AI system? [?]
  • AI application layer:
    • AI plugins and agents
    • Prompt enhancement via RAG, and associated RAG data sources
    • Application AI safety and security systems
    • The deployed AI application (Considered in the underlying HITRUST e1, i1, or r2 assessment)
    • The AI application’s supporting IT infrastructure (Considered in the underlying HITRUST e1, i1, or r2 assessment)
  • AI platform layer
    • The AI platform and associated APIs (Considered in the underlying HITRUST e1, i1, or r2 assessment)
    • Model safety and security systems
    • Model tuning and associated datasets
    • The deployed AI model
    • Model engineering environment and model pipeline
    • AI datasets and data pipelines
    • AI compute infrastructure (Considered in the underlying HITRUST e1, i1, or r2 assessment)

Discussed in which authoritative AI security sources? [?]
  • Securing Machine Learning Algorithms
    2021, © European Union Agency for Cybersecurity (ENISA)
    • Where:
      • 4.1- Security Controls > Organizational: Integrate ML applications into the overall cyber-resilience strategy

Control functions against which AI security threats? [?]
Additional information
  • Q: When will this requirement included in an assessment? [?]
    • This requirement will always be added to HITRUST assessments which include the
      Security for AI systems regulatory factor.

  • Q: Will this requirement be externally inheritable? [?] [?]
    • Yes, partially. This may be a responsibility shared between an AI application provider and their AI platform provider (if used), performed independently on separate layers/components of the overall AI system.
AI security threats considered

For a more information about how HITRUST incorporates threats into the HITRUST Approach, see Appendix 9 of our Risk Management Handbook.

No security control selection and rationalization effort can be performed without first considering the security risk and threat landscape. The HITRUST CSF requirement statements considered in the HITRUST AI Security Certification have been mapped to the following AI security-focused security threats.

AI Security Threat Applies to predictive AI models? Applies to Rule-based models? Applies to generative AI models? Mitigated before AI system deployment? Mitigated after AI system deployment?
Availability attacks
Denial of AI service Yes Yes Yes   Yes
Input-based attacks
Prompt injection     Yes   Yes
Evasion Yes Yes     Yes
Model inversion Yes       Yes
Model extraction and theft Yes Yes   Yes Yes
Poisoning attacks
Data poisoning Yes   Yes Yes  
Model poisoning Yes Yes Yes Yes  
Supply chain attacks
Compromised 3rd-party training datasets Yes   Yes Yes  
Compromised 3rd-party models or code Yes Yes Yes Yes  
Threats inherent to current-state language models
Confabulation     Yes   Yes
Sensitive information disclosure from model     Yes   Yes
Excessive agency     Yes   Yes
Harmful code generation     Yes   Yes

 

To understand the AI security risk and threat landscape specifically, HITRUST harmonized AI security-specific threats discussed in the following authoritative and commercial sources. The sources listed below that have been harmonized into the HITRUST CSF as of v11.4.0 are indicated. HITRUST may harmonize more of these sources in future versions of the HITRUST CSF at our discretion and based on your feedback.

No. Source and Link Published by Date or Version Harmonized into the HITRUST CSF as of v11.4.0?
From the Open Worldwide Application Security Project (OWASP)
1 OWASP Machine Learning Top 10 Open Worldwide Application Security Project (OWASP) v0.3 Yes
2 OWASP Top 10 for LLM Applications Open Worldwide Application Security Project (OWASP) v1.1.0 Yes
3 OWASP AI Exchange Open Worldwide Application Security Project (OWASP) As of Q3 2024 (living document) Yes
From the European Union Agency for Cybersecurity (ENISA)
4 Securing Machine Learning Algorithms European Union Agency for Cybersecurity (ENISA) 2021 No
5 Cybersecurity of AI and Standardization European Union Agency for Cybersecurity (ENISA) March 2023 No
6 Multilayer Framework for Good Cybersecurity Practices for AI European Union Agency for Cybersecurity (ENISA) June 2023 No
From the National Institute of Standards and Technology (NIST)
7 NIST AI 600-1:Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile National Institute of Standards and Technology (NIST) 2024 No
8 NIST AI 100-2:E2023: Adversarial Machine Learning: Taxonomy of Attacks and Mitigations National Institute of Standards and Technology (NIST) Jan. 2023 No
From commercial entities
9 The anecdotes AI GRC Toolkit Anecdotes A.I Ltd. 2024 No
10 Databricks AI Security Framework Databricks Version 1.1, Sept. 2024 No
11 Failure Modes in Machine Learning Microsoft Nov. 2022 No
12 HiddenLayer’s 2024 AI Threat Landscape Report HiddenLayer 2024 No
13 IBM Watsonx AI Risk Atlas IBM As of Aug. 2024 (living document) No
14 The StackAware AI Security Reference StackAware As of Aug. 2024 (living document) No
15 Snowflake AI Security Framework Snowflake Inc. 2024 No
From others
16 Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators US Department of Homeland Security April 2024 No
17 MITRE ATLAS (mitigations) The MITRE Corporation As of Q3 2024 (living document) Yes
18 Attacking Artificial Intelligence Harvard Kennedy School Aug. 2019 No
19 Engaging with Artificial Intelligence Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) Jan. 2024 No
20 CSA Large Language Model (LLM) Threats Taxonomy Cloud Security Alliance (CSA) June 2024 No
21 Securing Artificial Intelligence (SAI); AI Threat Ontology European Telecommunications Standards Institute (ETSI) 2022 No
22 ISO/IEC TR 24028:2020: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence International Standards Organization (ISO)/International Electrotechnical Commission (IEC) 2020 No

 

Relevant to this analysis: Because these documents were created to satisfy different needs and audiences, some discussed threats that did not apply to the scope of the HITRUST AI Security Certification effort. Namely, we removed from consideration threats that:

  1. did not relate to AI security for deployed systems, or

  2. applied to users of AI systems generally (and not to the deployers of AI systems), as these will be addressed through the addition of new AI usage requirements in version 12 of the HITRUST CSF slated for release in H2 2025.

The goal of analyzing these sources was not to ensure 100% coverage the AI threats discussed. Instead, comparing these sources against one another helped us:

  • Understand the AI security threat landscape, attack surface, and threat actors, as well as the applicability of various AI security threats to different AI deployment scenarios and model types
  • Minimize any subjectivity or personal bias we brought with us into the effort regarding these topics
  • Identify (by omission, minimal coverage, or direct discussion) the AI security threats which are generally not considered high risk or high impact to deployed AI systems
  • Identify (by consensus and heavy discussion) the AI security threats which are generally considered high risk or high impact to deployed AI systems
  • Identify the mitigations commonly recommended for identified AI security threats

Other key inputs into our understanding of the AI security threat landscape included:

  • Interviews with the authors of several of the documents listed above, as well as other cybersecurity leaders, on season 2 of HITRUST’s “Trust Vs.” podcast. These recordings are available here as well as podcast directories such as Apple Podcasts and YouTube Music.
  • Contributions from HITRUST’s AI Assurance Working Group, described in this press release. HITRUST is grateful to the members of this working group.
TOPIC: Availability attacks

This topic includes the following:

Denial of AI service

Description: Attacks aiming to disrupt the availability or functionality of the AI system by overwhelming it with a flood of requests for the purpose of degrading or shutting down the service. Several techniques can be employed here, including instructing the model to perform a time-consuming and/or computationally expensive task prior to answering the request (i.e., a sponge attack).

Impact: Impacts the availability over the overall AI system by rendering it inaccessible to legitimate users through performance degradation and/or system outage.

Applies to which types of AI models? Any

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
TOPIC: Input-based attacks
Evasion (including adversarial examples)

Description:

  • Evasion attacks consist of exploiting the imperfection of a trained model. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware. Samples are modified to evade detection; that is, to be classified as legitimate. This does not involve influence over the training data. A clear example of evasion is image-based spam in which the spam content is embedded within an attached image to evade textual analysis by anti-spam filters. (Source: Wikipedia )
  • Evasion attacks attempt fool the AI model through inputs designed to mislead it into performing its task incorrectly.

Impact: Affects the integrity of model outputs, decisions, or behaviors.

Applies to which types of AI models? Predictive (non-generative) machine learning models as well as rule-based / heuristic AI models.

 

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
Model extraction and theft

Description:

  • Model extraction aims to extract model architecture and parameters. (Source: NIST AI 100-2 Glossary)
  • Adversaries may extract a functional copy of a private model. (Source: MITRE ATLAS )

Impact:

  • Seeks to breach the confidentiality of the model itself.
  • Model extraction can lead to model stealing, which corresponds to extracting a sufficient amount of data from the model to enable the complete reconstruction of the model. (Source: Wikipedia)
  • Adversaries may exfiltrate model artifacts and parameters to steal intellectual property and cause economic harm to the victim organization. (Source: MITRE ATLAS )

Applies to which types of AI models? Predictive (non-generative) machine learning models as well as rule-based / heuristic AI models.

 

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
  • AI Risk Atlas
    2024, © IBM Corporation
  • Databricks AI Security Framework
    Sept. 2024, © Databricks
    • Where:
      • Risks in AI System Components > Model 7.2: Model assets leak
      • Risks in AI System Components > Model management 8.2: Model theft
      • Risks in AI System Components > Model serving – Inference requests 9.6: Discover ML model ontology
      • Risks in AI System Components > Model serving – Inference response 10.3: Discover ML model ontology
      • Risks in AI System Components > Model serving – Inference response 10.3: Discover ML model family

  • Failure Modes in Machine Learning
    Nov. 2022, © Microsoft
    • Where: Intentionally-Motivated Failures > Model stealing

  • HiddenLayer’s 2024 AI Threat Landscape Report
    2024, © HiddenLayer
    • Where:
      • Part 2: Risks faced by AI-based systems > Model evasion > Inference attacks
      • Part 2: Risks faced by AI-based systems > Model theft

  • Snowflake AI Security Framework
    2024, © Snowflake Inc.
    • Where: Model stealing
Model inversion

Description:

  • A class of attacks that seeks to reconstruct class representatives from the training data of an AI model, which results in the generation of semantically similar data rather than direct reconstruction of the data (i.e., extraction). (Source: NIST AI 100-2, section 2.4.1)
  • Machine learning models’ training data could be reconstructed by exploiting the confidence scores that are available via an inference API. By querying the inference API strategically, adversaries can back out potentially private information embedded within the training data. (Source: MITRE ATLAS )
  • Model inversion (or data reconstruction) occurs when an attacker reconstructs a part of the training set by intensive experimentation during which the input is optimized to maximize indications of confidence level in the output of the model. (Source: OWASP AI Exchange )

Impact:

  • Can lead to a confidentiality breach of sensitive and/or confidential model training data. Depending on the model, this training data may include personally identifiable information, or other protected data.

Applies to which types of AI models? Predictive (non-generative) machine learning models

 

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
Prompt injection

Description:

  • When an adversary crafts malicious user prompts as generative AI inputs that cause the AI system to act in unintended ways. These “prompt injections” are often designed to cause the model to bypass its original instructions and follow the adversary’s instructions instead.

Impact:

  • The impact of a successful prompt injection attack can vary greatly, depending on the context. Some prompt injection attacks attempt to cause the system to disclose confidential and/or sensitive information. For example, prompt extraction attacks aim to divulge the system prompt or other information in an LLMs context that would nominally be hidden from a user.

Applies to which types of AI models? Generative AI specifically

 

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
Additional information
  • The AI Prompt Injection Attacks portion of the StackAware AI Security Reference discusses several prompt injection techniques and attacker goals in detail.
TOPIC: Poisoning attacks

This topic includes the following:

Data poisoning

Description: Poisoning attacks in which a part of the training data is under the control of the adversary. Source: NIST AI 100-2 Glossary

Impact: Affects the integrity of model outputs, decisions, or behaviors.

Applies to which types of AI models? Data-driven models (e.g., predictive ML models, generative AI models)

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
Model poisoning

Description: Model poisoning attacks attempt to directly modify the trained AI model to inject malicious functionality into the model. Once trained, a model is often just a file residing on a server. Attackers can alter the model file or replace it entirely with a poisoned model file. In this respect, even if a model has been correctly trained with a dataset that has been thoroughly vetted, this model can still be replaced with a poisoned model at various points in the AI SDLC or in the runtime environment.

Impact: Affects the integrity of model outputs, decisions, or behaviors.

Applies to which types of AI models? Any

 

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
TOPIC: Supply chain attacks
Compromised 3rd-party models or code

Description:

  • Attacks that take advantage of compromised or vulnerable ML software packages and third-party pre-trained models used for fine tuning, plugins or extensions, including outdated or deprecated models or components.

Impact:

  • Use of models or AI software packages poisoned upstream in the AI supply chain can lead to integrity issues such as biased outcomes, confidentially issues such as redirecting AI system outputs or loss of API keys, or even availability issues like outage of the AI system. The impact depends heavily on the context of the overall AI system.

Applies to which types of AI models? Any

 

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
Compromised 3rd-party training datasets

Description:

  • Adversaries may poison training data and publish it to a public location. The poisoned dataset may be a novel dataset or a poisoned variant of an existing open-source dataset. This data may be introduced to a victim system via supply chain compromise.
    Source: MITRE ATLAS

Impact:

  • Use of poisoned datasets compromised upstream in the AI supply chain can lead to integrity issues such as biased outcomes or even availability issues like outage of the AI system. The impact depends heavily on the context of the overall AI system.

Applies to which types of AI models? Data-driven models (e.g., predictive ML models, generative AI models)

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
TOPIC: Threats inherent to language models
Confabulation

Description:

  • The production of confidently stated but incorrect content by which users or developers may be misled or deceived. Colloquially as AI “hallucinations” or “fabrications”.

Impact:

  • Inaccurate output (an integrity issue), the impact of which varies greatly depending on the context. The issue is exacerbated through overreliance on the AI system.

Applies to which types of AI models? Generative AI specifically

 

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
Additional information
  • HITRUST is intentionally focusing on the threat of LLM confabulation (which is almost always undesired) instead of hallucination (which is often a feature—not a bug—of stochastic systems).
    • See this document further discussing the difference between these related by distinct concepts in the context of generative AI.
    • This distinction is also addressed in NIST AI 600-1 which states, “Some commenters have noted that the terms hallucination and fabrication anthropomorphize GAI, which itself is a risk related to GAI systems as it can inappropriately attribute human characteristics to non-human entities.”
Excessive agency

Description:

  • Generative AI systems may undertake actions outside of the developer intent, organizational policy, and/or legislative, regulatory, and contractual requirements, leading to unintended consequences. This issue is facilitated by excessive permissions, excessive functionality, excessive autonomy, poorly defined operational parameters or granting the AI system the ability to make decisions or act without human intervention or oversight.

Impact:

  • Heavily dependent on which systems the overall AI system is connected to and can interact with (e.g., messaging systems, file servers, command prompts). Can lead to confidentiality, availability, or integrity issues.

Applies to which types of AI models? Generative AI specifically

 

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
Additional information
  • See this post for an overview of the difference between autonomy and agency, paraphrased as follows:
    • Autonomy, in the context of technology, generally refers to the ability to perform tasks without human intervention. The expansion of autonomy could indeed set the stage for the emergence of agency. When a system gains the capability to perform a network of tasks (constituting a decision situation) autonomously, it could be seen as a foundation upon which agency might build.
    • Agency implies a higher order of function—not just carrying out tasks, but also making choices about which tasks to undertake and when.
Sensitive information disclosed in output

Description:

  • Without proper guardrails, generative AI outputs can contain confidential and/or sensitive information included in the model’s training dataset, RAG data sources, or data residing in data sources that the AI system is connected to (e.g., through language model tools such as agents or plugins). Examples of such information include that which is covered under data protection laws and regulations (e.g., personally identifiable information, protected health information, cardholder data) and corporate secrets.

Impact:

  • Can lead to a confidentiality breach of sensitive/confidential data included in the model’s training dataset, RAG data sources, or data residing in data sources that the AI system is connected to (e.g., through language model tools such as agents or plugins).
  • Can also lead to a confidentiality breach of the system’s metaprompt, which can be an extremely valuable piece intellectual property. Discovering the metaprompt can inform adversaries about the internal workings of the AI application and overall AI system.

Applies to which types of AI models? Generative AI specifically

 

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
Harmful code generation

Description:

  • Without proper guardrails, generative AI models might generate code that causes harm or unintentionally affects other systems (e.g., via SQLi) or the end user (e.g., via XSS).

Impact:

  • Varied based on the nature of the harmful code generated.

Applies to which types of AI models? Generative AI specifically

 

Which AI security requirements function against this threat? [?]
Discussed in which authoritative sources? [?]
Discussed in which commercial sources? [?]
Crosswalks to other sources of AI guidance

The pages dedicated to each AI security requirement in this specification include detailed crosswalks to various AI authoritative sources.

Additionally, HITRUST has prepared crosswalks to and commentary on the following additional AI sources:

ISO/IEC 23894:2023

ISO/IEC 23894:2023 provides guidance on AI risk management. It is not a security-focused standard, but its guidance slightly overlaps with a small number of HITRUST CSF requirements included in the HITRUST AI Security Assessment and Certification given that security is a key area of risk to an AI system.

HITRUST offers an AI Risk Management Assessment and Insights Report which directly addresses the guidance provided by both ISO/IEC 23894:2023 and the NIST AI Risk Management Framework. Please see this video and this page for more information.

For the benefit of organizations utilizing the HITRUST AI Security Assessment and Certification and ISO/IEC 23894:2023, HITRUST has prepared the following crosswalk. Note that many mappings are labeled as “subset”, as most of the mapped HITRUST CSF requirements focus exclusively on AI security while the accompanying ISO/IEC 23894:2023 guidance should address the entirety of AI risk (not just security).

ISO/IEC 42001:2023

Check out this episode of HITRUST’s Trust Vs. podcast for more dialog on how organizations can leverage ISO/IEC 42001:2023 alongside an AI security framework.

ISO/IEC 42001:2023 is a management system standard from ISO and IEC. Its stated purpose is to “provide guidance for establishing, implementing, maintaining and continually improving an AI (artificial intelligence) management system within the context of an organization.” The scope of ISO/IEC 42001:2023 is broad. For example, it contains guidance such as “determine whether climate change is a relevant issue” and understanding the “competitive landscape and trends for new products and services using AI systems.”

Because the scope of an organization’s AI management system needs to consider many, many more AI risks than just cybersecurity, ISO/IEC 42001:2023 intentionally does not go very deep into AI cybersecurity. Instead of going deep into any single area (cybersecurity or otherwise), it instead aims to “provides guidelines for the deployment of applicable controls” while “avoid[ing] specific guidance on management processes.”

ISO/IEC 42001:2023’s introduction explains that “the organization can combine generally accepted frameworks, other international standards, and its own experience to implement crucial processes.” Meaning, ISO/IEC 42001:2023 is designed to be compatible with and implemented alongside more prescriptive frameworks such as the HITRUST CSF. By working alongside the organization’s AI management system, this HITRUST AI cybersecurity assessment and certification helps organizations nail AI security—and to prove that they have done so in a reliable and consistent way.

To assist adopters of both ISO/IEC 42001:2023 and the HITRUST AI Security Certification certification, mappings between the two documents’ content have been captured where possible. These mappings are shown in the following pages of this document. Note that the mapped HITRUST AI cybersecurity requirements support but may not fully cover the mapped ISO/IEC 42001:2023 expectation given the different purposes of these documents explained above.

Crosswalk

HITRUST AI security requirement (title) Mapping(s) to ISO/IEC 42001:2023
AI security threat management
Security evaluations such as AI red teaming
  • 8. Operation > Operational planning and control > Paragraph 3
AI legal and compliance
ID and evaluate compliance & legal obligations for AI system development and deployment
  • 4. Context of the organization > 4.1. Understanding the organization and its context > Note 2, bullet a, sub-bullet 1
AI security governance and oversight
Assign roles and responsibilities for AI
  • 4. Governance implications of the organizational use of AI > 4.3. Maintaining accountability when introducing AI
  • 5. Overview of AI and AI systems > 5.5. Constraints on the use of AI
  • 6. Policies to address the use of AI >
    • 6.2. Governance oversight of AI
    • 6.3. Governance of decision-making
  • 6. Policies to address the use of AI > 6.7. Risk > 6.7.3. Objectives
Augment written policies to address AI specificities
  • 4. Governance implications of the organizational use of AI > 4.3. Maintaining accountability when introducing AI
  • 5. Overview of AI and AI systems > 5.2. How AI systems differ from other information technologies > 5.2.3. Adaptive systems
  • 6. Policies to address the use of AI >
    • 6.2. Governance oversight of AI
    • 6.4. Governance of data use
    • 6.7.2. Risk management
Development of AI software
Provide AI security training to AI builders and AI deployers
  • 7. Support >
    • 7.2. Competence
    • 7.3. Awareness
Change control over AI models
  • 8. Operation > Operational planning and control > Paragraph 5
  • Annex A > A.6. AI system life cycle >
    • A.6.2.2. AI system requirements and specification
    • A.6.2.3. Documentation of AI system design and development
    • A.6.2.4. AI system verification and validation
    • A.6.2.5. AI system deployment
Change control over language model tools
  • 8. Operation > Operational planning and control > Paragraph 5
  • Annex A > A.6. AI system life cycle >
    • A.6.2.2. AI system requirements and specification
    • A.6.2.3. Documentation of AI system design and development
    • A.6.2.4. AI system verification and validation
    • A.6.2.5. AI system deployment
Documentation of AI specifics during system design and development
  • Annex A > A.4. Resources for AI systems > A.4. Resources for AI systems (all items)
AI supply chain
AI security requirements communicated to AI providers
  • Annex A > A.10. Third-party and customer relationships >
    • A.10.2. Allocating responsibilities
    • A.10.3. Suppliers
AI system logging and monitoring
Log AI system inputs and outputs
  • Annex A > A.6. AI system life cycle > A.6.2.8. AI system recording of event logs
Documenting and inventorying AI systems
AI data and data supply inventory
  • Annex A > A.7. Data for AI systems >
    • A.7.3. Acquisition of data
    • A.7.5. Data provenance
Resilience of the AI system
Updating incident response for AI specifics
  • Annex A > A.8. Information for interested parties of AI systems > A.8.4. Communication of incidents
AI updates to HITRUST’s glossary

This section lists several AI-relevant terms and acronyms used throughout this document that will be added to the HITRUST’s Glossary of Terms and Acronyms alongside the release of v11.4.0 of the HITRUST CSF.

Many of the definitions and citations for AI terms to be included in the upcoming update to HITRUST’s glossary have been sourced from the excellent NIST Trustworthy & Responsible AI Resource Center Glossary.

These glossary additions contain material from HITRUST and other authoritative sources and may be subject to multiple copyrights. The source is included within each term definition and is clearly attributed. Source definitions are used verbatim, unless noted otherwise; grammar and usage may vary. Some definitions have been altered slightly to make them generally applicable, such as language removed from a NIST definition that is particular to the U.S. Government or otherwise modified to accommodate the HITRUST AI Assurance Program. Such definitions are indicated by the word “adapted” after the source abbreviation. Definitions obtained from a discussion of the term, rather than a glossary, or developed from a similar term (or multiple terms) in a glossary, are indicated by the word “derived” after the source abbreviation.

Added terms
Term Definition Source Author(s) and/or Editor(s)
adversarial examples Modified testing samples which induce misclassification of a machine learning model at deployment time NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
AI agent Entity that senses and responds to its environment and takes actions to achieve its goals ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
AI application An AI application is a software system that utilizes an artificial intelligence or machine learning model as a core component in order to automate complex tasks. These tasks might require language understanding, reasoning, problem-solving, or perception to automate an IT helpdesk, a financial assistant, or health insurance questions, for example. AI models alone may not be directly beneficial to end-users for many tasks. But, they can be used as a powerful engine to produce compelling product experiences. In such an AI-powered application, end-users interact with an interface that passes information to the model. Robust Intelligence Robust Intelligence, Inc.
AI assurance A combination of frameworks, policies, processes and controls that measure, evaluate and promote safe, reliable and trustworthy AI. AI assurance schemes may include conformity, impact and risk assessments, AI audits, certifications, testing and evaluation, and compliance with relevant standards. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
AI component Functional element that constructs an AI system ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
AI governance A system of laws, policies, frameworks, practices and processes at international, national and organizational levels. AI governance helps various stakeholders implement, manage, oversee and regulate the development, deployment and use of AI technology. It also helps manage associated risks to ensure AI aligns with stakeholders’ objectives, is developed and used responsibly and ethically, and complies with applicable legal and regulatory requirements. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
AI platform An integrated collection of technologies to develop, train, and run machine learning models. This typically includes automation capabilities, machine learning operations (MLOps), predictive data analytics, and more. Think of it like a workbench–it lays out all of the tools you have to work with and provides a stable foundation on which to build and refine. Redhat.com Red Hat, Inc.
AI red teaming Red teaming is a way of interactively testing AI models to protect against harmful behavior, including leaks of sensitive data and generated content that’s toxic, biased, or factually inaccurate. IBM Research Blog: What is GenAI Red Teaming? Martineau, Kim
AI system Engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives. The engineered system can use various techniques and approaches related to artificial intelligence to develop a model to represent data, knowledge, processes, etc. which can be used to conduct tasks. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
algorithm A set of computational rules to be followed to solve a mathematical problem. More recently, the term has been adopted to refer to a process to be followed, often by a computer. Comptroller’s Handbook: Model Risk Management, Version 1.0 Office of the Comptroller of the Currency (OCC)
API rate limiting API rate limiting refers to controlling or managing how many requests or calls an API consumer can make to your API. You may have experienced something related as a consumer with errors about “too many connections” or something similar when you are visiting a website or using an app. An API owner will include a limit on the number of requests or amount of total data a client can consume. This limit is described as an API rate limit. An example of an API rate limit could be the total number of API calls per month or a set metric of calls or requests during another period of time. Axway Blog: What is an API rate limit? Defranchi, Lydia
bias Favoritism towards some things, people, or groups over others ISO/IEC TR 24028:2020: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
confabulation (context: AI) A false, degraded, or corrupted memory, is a stable pattern of activation in an artificial neural network or neural assembly that does not correspond to any previously learned patterns. The same term is also applied to the (non-artificial) neural mistake-making process leading to a false memory (confabulation). Wikipedia: Confabulation (neural networks)  
data catalog A Data Catalog is a collection of metadata, combined with data management and search tools, that helps analysts and other data users to find the data that they need, serves as an inventory of available data, and provides information to evaluate fitness of data for intended uses. Alation Blog: What Is a Data Catalog? – Importance, Benefits & Features Wells, Dave
data provenance A process that tracks and logs the history and origin of records in a dataset, encompassing the entire life cycle from its creation and collection to its transformation to its current state. It includes information about sources, processes, actors and methods used to ensure data integrity and quality. Data provenance is essential for data transparency and governance, and it promotes better understanding of the data and eventually the entire AI system. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
data science Methodology for the synthesis of useful knowledge directly from data through a process of discovery or of hypothesis formulation and hypothesis testing. NIST Big Data Interoperability Framework Chang, Wo L.;Grady, Nancy
data scientist A practitioner who has sufficient knowledge in the overlapping regimes of business needs, domain knowledge, analytical skills, and software and systems engineering to manage the end-to-end data processes in the analytics life cycle. NIST Big Data Interoperability Framework Chang, Wo L.;Grady, Nancy
data poisoning Poisoning attacks in which a part of the training data is under the control of the adversary NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
differential privacy Differential privacy is a method for measuring how much information the output of a computation reveals about an individual. It is based on the randomized injection of “noise”. Noise is a random alteration of data in a dataset so that values such as direct or indirect identifiers of individuals are harder to reveal. An important aspect of differential privacy is the concept of “epsilon” or ɛ, which determines the level of added noise. Epsilon is also known as the “privacy budget” or “privacy parameter”. DRAFT Anonymisation, pseudonymisation and privacy enhancing technologies guidance: Chapter 5: Privacy-enhancing technologies (PETs) Information Commissioner’s Office (UK Government)
embedding An embedding is a representation of a topological object, manifold, graph, field, etc. in a certain space in such a way that its connectivity or algebraic properties are preserved. For example, a field embedding preserves the algebraic structure of plus and times, an embedding of a topological space preserves open sets, and a graph embedding preserves connectivity. One space X is embedded in another space Y when the properties of Y restricted to X are the same as the properties of X. Wolfram MathWorld  
ensemble A machine learning paradigm where multiple models (often called “weak learners”) are trained to solve the same problem and combined to get better results. The main hypothesis is that when weak models are correctly combined we can obtain more accurate and/or robust models. Towards Data Science: Ensemble methods: bagging, boosting and stacking Rocca, Joseph
evasion Evasion attacks consist of exploiting the imperfection of a trained model. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware. Samples are modified to evade detection; that is, to be classified as legitimate. This does not involve influence over the training data. A clear example of evasion is image-based spam in which the spam content is embedded within an attached image to evade textual analysis by anti-spam filters. Wikipedia: Adversarial Machine Learning > Evasion  
expert system A computer system emulating the decision-making ability of a human expert through the use of reasoning, leveraging an encoding of domain-specific knowledge most commonly represented by sets of if-then rules rather than procedural code. The term “expert system” was used largely during the 1970s and ’80s amidst great enthusiasm about the power and promise of rule-based systems that relied on a “knowledge base” of domain-specific rules and rule-chaining procedures that map observations to conclusions or recommendations. National Security Commission on Artificial Intelligence: The Final Report National Security Commission on Artificial Intelligence (NSCAI)
fine-tuning Refers to the process of adapting a pre-trained model to perform specific tasks or to specialize in a particular domain. This phase follows the initial pre-training phase and involves training the model further on task-specific data. NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
foundation model A large language model that is trained on a broad set of diverse data to operate across a wide range of use cases. OWASP Top 10 for LLM Applications: Glossary The OWASP Foundation
generative AI A field of AI that uses deep learning trained on large datasets to create content, such as written text, code, images, music, simulations and videos, in response to user prompts. Unlike discriminative models, generative AI makes predictions on existing data rather than new data. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
guardrail (context: AI) An AI guardrail is a safeguard that is put in place to prevent artificial intelligence from causing harm. AI guardrails are a lot like highway guardrails – they are both created to keep people safe and guide positive outcomes. Techopedia Explains: AI Guardrail Techopedia
graphical processing unit (GPU) A specialized chip capable of highly parallel processing. GPUs are well-suited for running machine learning and deep learning algorithms. GPUs were first developed for efficient parallel processing of arrays of values used in computer graphics. Modern-day GPUs are designed to be optimized for machine learning. National Security Commission on Artificial Intelligence: The Final Report National Security Commission on Artificial Intelligence (NSCAI)
ground truth Information provided by direct observation as opposed to information provided by inference Collins Dictionary: ‘Ground truth’ HarperCollins Publishers
grounding The practice of ensuring that generative AI tools return results that are accurate (‘grounded’ in facts) rather than just those which are statistically probable or pleasing to a user. The Causeit Guide to Digital Fluency: Concept: Grounding (in AI) Causeit, Inc.
hallucination (context: AI) A response generated by AI which contains false or misleading information presented as fact. This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneous responses or beliefs rather than perceptual experiences. For example, a chatbot powered by large language models (LLMs) may embed plausible-sounding random falsehoods within its generated content. Some researchers believe the specific term “AI hallucination” unreasonably anthropomorphizes computers. (Adapted) Hallucination (artificial intelligence)  
heuristic AI See rule-based AI    
hyperparameters Characteristic of a machine learning algorithm that affects its learning process. Hyperparameters are selected prior to training and can be used in processes to help estimate model parameters. Examples of hyperparameters include the number of network layers, width of each layer, type of activation function, optimization method, learning rate for neural networks; the choice of kernel function in a support vector machine; number of leaves or depth of a tree; the K for K-means clustering; the maximum number of iterations of the expectation maximization algorithm; the number of Gaussians in a Gaussian mixture. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
inference The stage of ML in which a model is applied to a task. For example, a classifier model produces the classification of a test sample. NIST IR 8269: A Taxonomy and Terminology of Adversarial Machine Learning (Draft) Tabassi, Elham;Kevin J. Burns; Michael Hadjimichael; Andres D. Molina-Markham; Julian T. Sexton
input Data received from an external source ISO/IEC/IEEE 24765:2017 — Systems and software engineering — Vocabulary International Standards Organization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE)
large language model (LLM) A type of artificial intelligence (AI) that is trained on a massive dataset of text and code. LLMs used natural language processing to process requests and generate data. OWASP Top 10 for LLM Applications: Glossary The OWASP Foundation
language model A language model is a probabilistic model of a natural language. In 1980, the first significant statistical language model was proposed, and during the decade IBM performed ‘Shannon-style’ experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text. Language models are useful for a variety of tasks, including speech recognition (helping prevent predictions of low-probability (e.g. nonsense) sequences), machine translation, natural language generation (generating more human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval. Wikipedia: Language model  
LLM agent A piece of code that formulates prompts to an LLM and parses the output in order to perform an action or a series of action (typically by calling one or more plugins/tools). OWASP Top 10 for LLM Applications: Glossary The OWASP Foundation
LLM tool A piece of code that exposes external functionality to an LLM Agent; e.g., reading a file, fetching the contest of a URL, querying a database, etc. OWASP Top 10 for LLM Applications: Glossary The OWASP Foundation
LLM plugin Similar to LLM Tool but more often used in the context of chatbots (e.g., ChatGPT) OWASP Top 10 for LLM Applications: Glossary The OWASP Foundation
machine learning (ML) A branch of Artificial Intelligence (AI) that focuses on the development of systems capable of learning from data to perform a task without being explicitly programmed to perform that task. Learning refers to the process of optimizing model parameters through computational techniques such that the model’s behavior is optimized for the training task. EU-U.S. Terminology and Taxonomy for Artificial Intelligence – Second Edition EU-US Trade and Technology Council (TTC) Working Group 1 (WG1)
metaprompt The metaprompt or system message is included at the beginning of the prompt and is used to prime the model with context, instructions, or other information relevant to your use case. You can use the system message to describe the assistant’s personality, define what the model should and shouldn’t answer, and define the format of model responses. Microsoft Artificial Intelligence and Machine Learning Blog: Creating effective security guardrails with metaprompt/system message engineering Young, Sarah
minimization (Part of the ICO framework for auditing AI) AI systems generally require large amounts of data. However, organizations must comply with the minimization principle under data protection law if using personal data. This means ensuring that any personal data is adequate, relevant and limited to what is necessary for the purposes for which it is processed. […] The default approach of data scientists in designing and building AI systems will not necessarily take into account any data minimization constraints. Organizations must therefore have in place risk management practices to ensure that data minimization requirements, and all relevant minimization techniques, are fully considered from the design phase, or, if AI systems are bought or operated by third parties, as part of the procurement process due diligence A guide to ICO Audit: Artificial Intelligence (AI) Audits Information Commissioner’s Office (UK Government)
modality In the context of human–computer interaction, a modality is the classification of a single independent channel of input/output between a computer and a human. Such channels may differ based on sensory nature (e.g., visual vs. auditory), or other significant differences in processing (e.g., text vs. image). Wikipedia: Modality (human–computer interaction)  
model A core component of an AI system used to make inferences from inputs in order to produce outputs. A model characterizes an input-to-output transformation intended to perform a core computational task of the AI system (e.g., classifying an image, predicting the next word for a sequence, or selecting a robot’s next action given its state and goals). EU-U.S. Terminology and Taxonomy for Artificial Intelligence – Second Edition EU-US Trade and Technology Council (TTC) Working Group 1 (WG1)
model card A brief document that discloses information about an AI model, like explanations about intended use, performance metrics and benchmarked evaluation in various conditions, such as across different cultures, demographics or race. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
model extraction Type of privacy attack to extract model architecture and parameters NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
model inversion A class of attacks that seeks to reconstruct class representatives from the training data of an AI model, which results in the generation of semantically similar data rather than direct reconstruction of the data (i.e., extraction). NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
model poisoning Poisoning attacks in which the model parameters are under the control of the adversary NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
model training Process to determine or to improve the parameters of a machine learning model, based on a machine learning algorithm, by using training data ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
model validation Confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled ISO/IEC 27043:2015: Information technology — Security techniques — Incident investigation principles and processes International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
model zoo A Model Zoo is a repository or library that contains pre-trained models for various machine learning tasks. Model Zoos are often provided by machine learning platforms or communities, such as TensorFlow’s Model Garden, PyTorch’s Torchvision, and Hugging Face’s Transformers. Saturncloud.io glossary Saturncloud.io
open-source AI An AI system that makes its components available under licenses that individually grant the freedoms to: Study how the system works and inspect its components, use the system for any purpose and without having to ask for permission, modify the system to change its recommendations, predictions or decisions to adapt to your needs, and share the system with or without modifications for any purpose. These freedoms apply both to a fully functional system and to discrete elements of a system. A precondition to exercising these freedoms is to have access to the preferred form to make modifications to the system. Opensource.org: The Open Source AI Definition open source initiative
output Process by which an information processing system, or any of its parts, transfers data outside of that system or part ISO/IEC/IEEE 24765:2017 — Systems and software engineering — Vocabulary International Standards Organization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE)
parameter Internal variable of a model that affects how it computes its outputs ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
prediction (context: AI) Primary output of an AI system when provided with input data or information. Predictions can be followed by additional outputs, such as recommendations, decisions and actions. Prediction does not necessarily refer to predicting something in the future. Predictions can refer to various kinds of data analysis or production applied to new data or historical data (including translating text, creating synthetic images or diagnosing a previous power failure). ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
predictive AI Artificial intelligence systems that utilize statistical analysis and machine learning algorithms to make predictions about potential future outcomes, causation, risk exposure, and more. Systems of this kind have been applied across numerous industries. For example:
• Healthcare: leveraging patient data to diagnose diseases and model disease progression
• Finance: predicting movements of markets and analyzing transaction data to detect fraud
• Retail and e-commerce: examining sales data, seasonality, and non-financial factors to optimize pricing strategies or forecast consumer demand
• Insurance: streamlining claims management or forecasting potential losses to ensure adequate reserves are maintained
Predictive AI Carnegie Council for Ethics in International Affairs
prompt A prompt is natural language text describing the task that an AI should perform: a prompt for a text-to-text language model can be a query such as “what is Fermat’s little theorem?”, a command such as “write a poem about leaves falling”, or a longer statement including context, instructions, and conversation history. Wikipedia: Prompt engineering  
prompt extraction An attack in which the objective is to divulge the system prompt or other information in an LLMs context that would nominally be
hidden from a user
NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
prompt injection Attacker technique in which a hacker enters a text prompt into an LLM or chatbot designed to enable the user to perform unintended
or unauthorized actions
NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
rate limiting A limit on how often a client can call a service within a defined window of time. When the limit is exceeded, the client—rather than receiving an application-related response—receives a notification that the allowed rate has been exceeded as well as additional data regarding the limit number and the time at which the limit counter will be reset for the requestor to resume receiving responses. NIST Special Publication 800-204: Security Strategies for Microservices-based Application Systems Chandramouli, Ramaswamy
randomized smoothing A method that transforms any classifier into a certifiable robust smooth classifier by producing the most likely predictions under Gaussian noise perturbations. This method results in provable robustness for ℓ2 evasion attacks, even for classifiers trained on large-scale datasets, such as ImageNet. Randomized smoothing typically provides certified prediction to a subset of testing samples (the exact number depends on the radius of the ℓ2 ball and the characteristics of the training data and model). Recent results have extended the notion of certified adversarial robustness to ℓ2-norm bounded perturbations by combining a pretrained denoising diffusion probabilistic model and a standard high-accuracy classifier [50]. NIST AI 100-2e2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations Vassilev, Apostol;Oprea, Alina;Fordyce, Alie;Anderson, Hyrum
red-team A group of people authorized and organized to emulate a potential adversary’s attack or exploitation capabilities against an enterprise’s security posture. The Red Team’s objective is to improve enterprise cybersecurity by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders (i.e., the Blue Team) in an operational environment. Also known as Cyber Red Team. Information Technology Laboratory Computer Security Resource Center (CSRC) Glossary National Institute of Standards and Technology (NIST)
responsible AI An AI system that aligns development and behavior to goals and values. This includes developing and fielding AI technology in a manner that is consistent with democratic values. National Security Commission on Artificial Intelligence: The Final Report National Security Commission on Artificial Intelligence (NSCAI)
retrieval augmented generation (RAG) RAG is an AI framework for retrieving facts from an external knowledge base to ground large language models (LLMs) on the most accurate, up-to-date information and to give users insight into LLMs’ generative process. IBM Research Blog: Retrieval Augmented Generation Martineau, Kim
robust AI An AI system that is resilient in real-world settings, such as an object-recognition application that is robust to significant changes in lighting. The phrase also refers to resilience when it comes to adversarial attacks on AI components. National Security Commission on Artificial Intelligence: The Final Report National Security Commission on Artificial Intelligence (NSCAI)
robustness The ability of a machine learning model/algorithm to maintain correct and reliable performance under different conditions (e.g., unseen, noisy, or adversarially manipulated data). NIST IR 8269: A Taxonomy and Terminology of Adversarial Machine Learning Tabassi, Elham;Kevin J. Burns; Michael Hadjimichael; Andres D. Molina-Markham; Julian T. Sexton
rule-based AI Rule-based systems are a basic type of AI model that uses a set of prewritten rules to make decisions and solve problems. Developers create rules based on human expert knowledge, which then enable the system to process input data and produce a result. To build a rule-based system, a developer first creates a list of rules and facts for the system. An inference engine then measures the information given against these rules. Here, human knowledge is encoded as rules in the form of if-then statements. The system follows the rules set and only performs the programmed functions. For example, a rule-based algorithm or platform could measure a bank customer’s personal and financial information against a programmed set of levels. If the numbers match, the bank grants the applicant a home loan. TechTarget.com Tip: Choosing between a rule-based vs. machine learning system Carew, Joseph M.;Foster, Emily;Wisbey, Olivia
sensitive data Data with potentially harmful effects in the event of disclosure or misuse ISO/IEC TR 24028:2020: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
small language model (SLM) A smaller version of their better-known and larger counterparts, large language models. Small is a reference to the size of the models. They have fewer parameters and require a much smaller training dataset, optimizing them for efficiency and better suiting them for deployment in environments with limited computational resources or for applications that require faster training and inference time. IAPP Key Terms for AI Governance International Association of Privacy Professionals (IAPP)
system prompt See metaprompt    
test data (context: AI) Test data is the data used to evaluate the performance of the AI system, before its deployment. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
threat modeling Threat modeling is analyzing representations of a system to highlight concerns about security and privacy characteristics. At the highest levels, when we threat model, we ask four key questions: (1) What are we working on? (2) What can go wrong? (3) What are we going to do about it? (4) Did we do a good enough job? Threat Modeling Manifesto Braiterman, Zoe;Shostack, Adam;Marcil, Jonathan; de Vries, Stephen;Michlin, Irene;Wuyts, Kim; Hurlbut, Robert;Schoenfield, Brook SE;Scott, Fraser;Coles, matthew;Romeo, Chris;Miller, Alyssa;Tarandach, Izar;Douglen, Avi;French, Mark
training data Training data consists of data samples used to train a machine learning algorithm. Typically, the data samples relate to some particular topic of concern and they can consist of structured or unstructured data. The data samples can be unlabelled or labelled. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
trustworthy AI Trustworthy AI has three components: (1) it should be lawful, ensuring compliance with all applicable laws and regulations (2) it should be ethical, demonstrating respect for, and ensure adherence to, ethical principles and values and (3) it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm. Characteristics of Trustworthy AI systems include: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. Trustworthy AI concerns not only the trustworthiness of the AI system itself but also comprises the trustworthiness of all processes and actors that are part of the AI system’s life cycle. Trustworthy AI is based on respect for human rights and democratic values. EU-U.S. Terminology and Taxonomy for Artificial Intelligence – Second Edition EU-US Trade and Technology Council (TTC) Working Group 1 (WG1)
validation (context: AI) In software assessment frameworks, validation is the process of checking whether certain requirements have been fulfilled. It is part of the evaluation process. In AI-specific context, the term “validation” is used to refer to the process of leveraging data to set certain values and properties relevant to the system design. It is not about assessing the system with respect to its requirements, and it occurs before the evaluation stage. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
validation data Validation data corresponds to data used by the developer to make or validate some algorithmic choices (hyperparameter search, rule design, etc.). It has various names depending on the field of AI, for instance in natural language processing it is typically referred to as development data. ISO/IEC 22989:2022: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology International Standards Organization (ISO)/International Electrotechnical Commission (IEC)
Added acronyms
AI
Artificial Intelligence
AIMS
Artificial Intelligence Management System (e.g., as described in ISO/IEC 42001:2023)
API
Application Programming Interface
ATLAS
Adversarial Threat Landscape for Artificial Intelligence Systems
GAI
Generative Artificial Intelligence
GPU
Graphics Processing Unit
LLM
Large Language Model
ML
Machine Learning
NSCAI
National Security Commission on Artificial Intelligence
RACI
Responsible, Accountable, Consulted, and Informed (in the context of a responsibility assignment matrix, as discussed in this proposed AI requirement statement)
RAI
Responsible AI
RAG
Retrieval Augmented Generation
SLM
Small Language Model
TAI
Trustworthy AI
Chat

Chat Now

This is where you can start a live chat with a member of our team