Part Two: The Quest for 1%
By Karen Proffitt, MHIIM, RHIA, CHP – Vice President, Industry Relations/CPO
In Part One, we discussed the ongoing challenges with patient identification and matching, and AHIMA’s recommendation to achieve a duplicate record rate of no more than 1%. In Part Two, we dig into the specifics of the roles that database environments, duplicate error and creation rate calculations, workforce factors, and patient matching algorithms play in reaching that goal.
In its July 2020 white paper, “A Realistic Approach to Achieving a 1% Duplicate Record Error Rate,” AHIMA recommends a duplicate record error rate not to exceed 1%, while acknowledging that reaching that goal is hindered by industry-wide variability in patient identification and matching methods and processes. That is why the association emphasizes the need to “identify and understand how database environments, duplicate error and creation rate calculations, workforce factors, and the types of patient matching algorithms play an important role in achieving and maintaining a low duplicate record error rate.”
Duplicate Error and Creation Rate Calculation: To accurately calculate the duplicate record error rate for a single MPI database, AHIMA recommends dividing the total number of possible duplicate records by the total number of patient records in the MPI database. For example: in the case of 5,000 possible duplicate pairs (two records each) involving 10,000 individual records in a database containing 500,000 individual records, the calculation is 10,000/500,000 = 2% duplicate record error rate.
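To see the arithmetic spelled out, here is a minimal sketch of that error rate calculation in Python, using the example figures above (the variable names are ours, for illustration only):

```python
# Duplicate record error rate: possible duplicate records / total MPI records.
# Figures come from the worked example above; names are illustrative only.
possible_duplicate_pairs = 5_000
duplicate_records = possible_duplicate_pairs * 2      # each pair involves 2 records
total_mpi_records = 500_000

error_rate = duplicate_records / total_mpi_records
print(f"Duplicate record error rate: {error_rate:.1%}")   # 2.0%
```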
To calculate the duplicate record creation rate for a single MPI database, AHIMA recommends dividing the total number of confirmed duplicate records for a defined time period by the total number of registration events within the MPI during the same time period. For example: if 3,000 duplicate patient records were confirmed in the third quarter and there were 200,000 registration events during that quarter, the calculation is 3,000/200,000 = 1.5% duplicate record creation rate.
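The creation rate example can be checked the same way (again, the variable names are ours and the figures are the ones quoted above):

```python
# Duplicate record creation rate: confirmed duplicates in a period /
# registration events in the same period (Q3 figures from the example above).
confirmed_duplicates = 3_000
registration_events = 200_000

creation_rate = confirmed_duplicates / registration_events
print(f"Duplicate record creation rate: {creation_rate:.1%}")   # 1.5%
```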
Workforce Factors: According to AHIMA, errors can occur at any point in a patient’s journey through the healthcare system, so an organization’s entire workforce should undergo iterative training in patient identification and matching. This is especially important for anyone who uses the patient record, so they can identify, prevent, and resolve errors.
Types of Patient Matching Algorithms: Most HIM systems use some form of duplicate detection algorithm to help identify possible duplicate medical records within their database. There are three main types (a simplified sketch follows the list):
- Deterministic compares a unique identifier, sometimes paired with nonunique identifiers (e.g., DOB) for additional validation, to identify exact matches. These are considered basic algorithms and usually make comparisons based on name, DOB, SSN, and sometimes gender.
- Rules-based assigns each data element a “weight” reflecting how essential it is to matching a record. As long as enough weighted data elements are identical, two records are considered a match even if every data element does not match exactly.
- Probabilistic compares several nonunique field values between records and assigns each comparison a weight reflecting how closely the two field values match. The weights are then summed across fields to indicate the probability of an actual match. Probabilistic algorithms are considered intermediate or advanced.
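To make the distinctions concrete, below is a deliberately simplified Python sketch of the three approaches. The patient fields, weights, and thresholds are invented for illustration and do not represent AHIMA guidance or any vendor’s production algorithm.

```python
# Hypothetical records that differ only in the spelling of the first name.
from difflib import SequenceMatcher

rec_a = {"name": "Jon Smith",  "dob": "1980-04-12", "ssn": "123-45-6789", "gender": "M"}
rec_b = {"name": "John Smith", "dob": "1980-04-12", "ssn": "123-45-6789", "gender": "M"}

def deterministic_match(a, b):
    # Exact comparison on a unique identifier (SSN), validated by DOB.
    return a["ssn"] == b["ssn"] and a["dob"] == b["dob"]

def rules_based_match(a, b, threshold=5):
    # Each field carries a fixed weight; enough exact agreement counts as a
    # match even when some fields (here, the name) disagree.
    weights = {"ssn": 4, "dob": 2, "name": 2, "gender": 1}
    score = sum(w for field, w in weights.items() if a[field] == b[field])
    return score >= threshold

def probabilistic_match(a, b, threshold=0.85):
    # Each field contributes a weighted similarity score rather than a simple
    # agree/disagree, so "Jon" vs. "John" still earns partial credit.
    weights = {"ssn": 0.4, "dob": 0.3, "name": 0.2, "gender": 0.1}
    score = sum(
        w * SequenceMatcher(None, a[field], b[field]).ratio()
        for field, w in weights.items()
    )
    return score >= threshold

print(deterministic_match(rec_a, rec_b))   # True (SSN and DOB match exactly)
print(rules_based_match(rec_a, rec_b))     # True (7 of 9 weight points agree)
print(probabilistic_match(rec_a, rec_b))   # True (weighted similarity is about 0.99)
```

In practice, probabilistic engines typically also account for how common a value is (a match on “Smith” means less than a match on a rare surname), which is part of what separates intermediate and advanced algorithms from basic ones.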
Just Associates’ advanced algorithm powers its IDSentry™ solution, enabling it to identify more true duplicates than systems that rely on basic and even intermediate patient matching algorithms.
Organizational Goals Necessary to Achieve and Maintain Data Integrity: A roundtable of health information professionals convened by AHIMA identified several impact areas to address patient identity and matching. These areas were governance and leadership, data collection, and data integrity. The group also created an organizational goal checklist around these areas, which can be found here.
In Part Three, we look at the ICMMR Cycle approach recommended by AHIMA to achieve a 1% duplicate record error rate.