By Bimal Sheth, Vice President of Assurance Services
If you joined the webinar on November 20th, Improving the Throughput and Transparency of the HITRUST Assurance Program, you heard about some exciting changes to the HITRUST CSF® Assurance Program. This blog post is the second in a series of communications meant to increase transparency as we continuously work to improve our processes. If you didn’t attend the webinar, a replay is available here; you can also read the previous blog post in this series here. The feedback we received from the HITRUST® community over the past few weeks has been energizing and invaluable. Please continue to submit thoughts and questions to your Customer Success Managers, or to email@example.com.
In this post I discuss the new scoring rubrics, provide updates on some of the changes introduced in the webinar, and offer some tips to help reduce the number of quality assurance (QA) comments you receive from HITRUST, shortening the time from submission of a validated assessment to issuance of a HITRUST CSF Validated Report.
New Scoring Rubrics
As a reminder, the new scoring rubric is available and is expected to be used for all validated assessments accepted after December 30th, 2019. Based upon feedback from the HITRUST community, I wanted to highlight a few important areas of the rubric below; however, I strongly recommend that all assessed entities, Internal Assessors, and External Assessors read the related white paper and listen to the webinar entitled Updated Scoring Rubric and Updated PRISMA Model.
- Policy rubric –
- Strength – The new scoring rubric for Policy creates four tiers for ‘strength’ based upon the number of policy elements that are met. Policy elements are defined on the back of the rubric as: (i) demonstrably approved by management, (ii) demonstrably communicated to stakeholders in the organization and members of the workforce, and (iii) clearly communicates management’s expectations for the operation of the control(s) (e.g., using “shall”, “will”, or “must” statements).
- Coverage – The new scoring rubric for Policy defines ‘coverage’ based upon the percentage of HITRUST CSF policy elements addressed. HITRUST CSF policy elements are found in the policy level’s illustrative procedure for each requirement statement. Reviewing the policy against the requirement statement alone is insufficient, as the policy level’s illustrative procedure often describes additional CSF elements that must be evaluated.
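As an illustration only, the Policy score can be thought of as combining two independent inputs: a strength tier and a coverage percentage. The sketch below is hypothetical — the rubric itself defines the actual tier boundaries; here we simply assume one tier per count of policy elements met, since three elements and four tiers are described.

```python
def policy_strength_tier(elements_met: int) -> int:
    """Hypothetical mapping from policy elements met (0-3) to the four
    strength tiers. The real boundaries are defined on the rubric itself;
    this only illustrates the shape of the four-tier structure."""
    if not 0 <= elements_met <= 3:
        raise ValueError("The rubric defines three policy elements")
    return elements_met + 1  # tiers 1 through 4


def policy_coverage(elements_addressed: int, total_elements: int) -> float:
    """Coverage as the percentage of CSF policy elements (drawn from the
    policy level's illustrative procedure) addressed by the written policy."""
    return 100.0 * elements_addressed / total_elements
```

For example, a policy meeting all three elements would sit in the top strength tier, while a policy addressing three of four CSF elements in the illustrative procedure would have 75% coverage.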
- Procedure rubric –
- Similar to the Policy rubric; however, procedure strength is evaluated against four procedural criteria defined on the back of the rubric.
- The previous rubric exempted automated controls from needing a written procedure. The new rubric eliminates this exemption. Under the new rubric, all controls, including automated controls, need written procedures in order to reach a score of fully compliant for the Procedure PRISMA level.
- Measured rubric –
- The previous rubric required both an operational and an independent measure to achieve a score of partially compliant for the Measured PRISMA level; under the new rubric, this score can be achieved with a single independent measure.
- The new rubric clarifies the definition of a ‘measure’ on the back and outlines three requirements, all of which must be met for something to be classified as a measure.
- To achieve tier 3 or tier 4 strength, you must have a metric. A metric must meet all the requirements of a measure AND meet two additional metric-specific requirements described on the back of the scoring rubric.
- Sampling guidance –
- For populations of fewer than 50 items, the new rubric recommends the use of professional judgement but requires a minimum sample size of 3 items. This differs from the old scoring rubric’s guidance for populations of fewer than 50 items, which recommended a sample size of 5 but allowed a sample size as small as 1.
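The small-population rule above can be sketched as a simple floor function. This is an illustration of the stated minimum only; the capping of the sample at the population size when fewer than 3 items exist is my assumption, and larger populations follow separate guidance not covered here.

```python
def minimum_sample_size(population_size: int) -> int:
    """Sketch of the new rubric's sampling floor for small populations.

    For populations under 50 items the assessor applies professional
    judgement, but the sample may never be smaller than 3 items.
    Capping at the population size (when it holds fewer than 3 items)
    is an assumption for completeness; the rubric itself governs.
    """
    if population_size >= 50:
        raise ValueError("Populations of 50+ items follow separate guidance")
    return min(population_size, 3)
```

So a population of 10 items still requires at least 3 sampled items, where the old rubric would have permitted a sample of 1.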
- Automated controls – Clarified that a test of one is appropriate only if both of the following conditions are met:
- For a configurable control, the associated configuration must be tested, and
- The outcome/result of the configuration must be tested.