
This article has a correction.


Checking it twice: an evaluation of checklists for detecting medication errors at the bedside using a chemotherapy model
Rachel E White,1 Patricia L Trbovich,1 Anthony C Easty,2 Pamela Savage,3 Katherine Trip,3 Sylvia Hyland4

1 Healthcare Human Factors Group, Centre for Global eHealth Innovation, University Health Network, Toronto, Canada
2 Healthcare Human Factors Group, Centre for Global eHealth Innovation, University Health Network, University of Toronto, Mount Sinai Hospital, Toronto, Canada
3 Princess Margaret Hospital, University Health Network, Toronto, Canada
4 Institute for Safe Medication Practices Canada, Toronto, Canada

Correspondence to Ms Rachel E White, Healthcare Human Factors Group, Centre for Global eHealth Innovation, University Health Network, 4th Floor, R Fraser Elliott Building, 190 Elizabeth Street, Toronto, Ontario M5G 2C4, Canada; rachel.white@uhn.on.ca

Abstract

Objective To determine what components of a checklist contribute to effective detection of medication errors at the bedside.

Design High-fidelity simulation study of outpatient chemotherapy administration.

Setting Usability laboratory.

Participants Nurses from an outpatient chemotherapy unit, who used two different checklists to identify four categories of medication administration errors.

Main outcome measures Rates of specified types of errors related to medication administration.

Results As few as 0% and as many as 90% of each type of error were detected. Error detection varied as a function of error type and checklist used. Specific step-by-step instructions were more effective than abstract general reminders in helping nurses to detect errors. Adding a specific instruction to check the patient's identification improved error detection in this category by 65 percentage points. Matching the sequence of items on the checklist with nurses' workflow had a positive impact on the ease of use and efficiency of the checklist.

Conclusions Checklists designed with explicit step-by-step instructions are useful for detecting specific errors when a care provider is required to perform a long series of mechanistic tasks under a high cognitive load. Further research is needed to determine how best to assist clinicians in switching between mechanistic tasks and abstract clinical problem solving.

  • Medication error
  • medication safety
  • checklist
  • double check
  • error detection

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.


Introduction

Adverse drug events have long been established as a significant cause of patient harm,1–4 and severe incidents with intravenous drug administration have been well documented.5–8 Thus, for certain medications, safety groups are increasingly recommending the use of independent double checks for detecting errors.9–11

Independent double checking is a process involving two individuals, in which the responsibility of the second individual is to verify the work performed by the first.12 The word ‘independent’ refers to having a second person follow a series of steps to arrive at a calculation or setting; those steps are performed with no prior knowledge of any previous calculation or setting. This approach is thought to reduce the possibility of ‘confirmation bias,’ which occurs when the person checking the medication is likely to see what they expect to see, even if an error has occurred.13

Although recommendations for carrying out independent double checks are common,9–11 little research has been done on this practice. One frequently cited study established an error detection rate of 93–97%14 but did not include details of how the checks were performed. Safety organisations have nonetheless made specific recommendations about how double checks should be performed. For example, some have recommended that the second clinician have no knowledge of the calculation or setting before conducting the verification.9 15 Others have recommended the use of a checklist and a formalised process,12 but no examples or specific design recommendations have been provided. As such, it is not clear which elements of a double check yield the highest success in detecting errors.

Our hospital's outpatient chemotherapy unit recently introduced a new double checking process and checklist for the verification of ambulatory infusion pump (AIP) settings. Many studies have shown the effectiveness of checklists in improving patient safety,16–18 but a recent review found very little research on the effective design of checklists.19

In the study reported here, we used an experimental approach to examine what components of a double check contributed to effective detection of medication errors with the aim of generating evidence-based recommendations for designing supportive tools. Given the opportunity for study in our hospital and the risks associated with chemotherapy administration,20 the clinical model for this research was chemotherapy.

Methods

It was essential to begin this research with a solid understanding of the context in which error checking takes place, as well as the specific risks associated with administering chemotherapy by AIP. Hence, using the approach of contextual enquiry,21 22 we observed 13 registered nurses in the unit for 30 h. We then identified all failures that could occur and classified these into four categories (table 1).23

Table 1

Failure modes for chemotherapy administered via ambulatory infusion pumps

Checklist designs

Two checklists were compared in this study. One (the 'old' checklist) had been in use in the unit for several months; the other (the 'new' checklist) was a revision of the old one, informed by our observations of how it was being used.

Old checklist

This checklist had been designed to ensure that the second nurse had no knowledge of the medication order before viewing the pump settings,9 15 and was meant to be conducted as follows (figure 1):

  • The first nurse checks the ‘five rights’ between the order and the drug label, and programmes the pump according to the label.

  • The second nurse copies all values directly from the pump screen to the checklist, and then verifies the checklist against the label.

  • The second nurse checks the drug label against the order to ensure the drug has been prepared as prescribed.

  • The second nurse returns the pump and the checklist to the first nurse, who checks the label against the patient's armband and administers the medication to the patient.

Figure 1

Checklist for independent double checking introduced in the chemotherapy nursing unit (old checklist).

During observations, we noticed the checklist was not routinely used as intended. Specifically, the second nurse tended to look cyclically at the label, order and pump for each item and then write its value onto the checklist. This approach meant that the nurse had knowledge of what to expect on the pump screens before looking at the pump, hence compromising the independence of the check.

We believed the reason for this behaviour was the placement of the items ‘total dose’ and ‘infusion duration’ at the beginning of the list. This information was only available on the drug label and order, not the pump. Thus, to complete the checklist from top to bottom, nurses had to look first at the order or label.

We also observed that, despite instructions at the top of the form to check the label against the patient's armband, nurses did not routinely check patient identity.

New checklist

Based on our observations, we refined the checklist (figure 2). To address the issue of independence, the checklist was rearranged so that its first six items mirrored the exact sequence of the infusion pump prompts.

Figure 2

Redesigned checklist for performing double checks (new checklist).

To address the potential for administering a medication to the wrong patient, we embedded a specific item into the checklist, reminding the nurse to check the patient's identity from the armband against the drug label.

We were concerned that clinical decision errors would be missed with the existing checklist, since there were no reminders to check for this error type. We thus added a reminder of the ‘five rights’ of medication administration, along with a graphic stating ‘STOP! Knowing all you know, does this order make sense to you?’

Procedure

To compare the two checklists, we simulated error checking for intravenous chemotherapy in a usability laboratory with one-way mirrors and cameras (figure 3). Ten nurses from the unit were recruited to participate. Furniture, patient charts, interruptions and ambient noise were used to replicate their clinical environment and tasks.

Figure 3

Usability laboratory.

A nurse actor programmed the pumps and created a realistic teamwork environment, two actors played the roles of patients with cancer, and a mannequin was used as an additional patient (figure 4).

Figure 4

Nurse participant interacting with a patient actor as the confederate nurse interrupts in the simulated outpatient chemotherapy environment.

The focus of the experiment was on the second nurse's ability to detect errors using the checklists. Participants played the role of the second nurse: they had to care for their own patients while also checking the pumps programmed by the confederate nurse. To create a sense of realism, they were regularly interrupted by the actors' unscripted conversations.

Each participant used both the old and the new checklist; half used the old checklist first. In total, each participant checked 14 pumps, seven with each double checking method (table 2). Errors were counterbalanced between participants and carefully matched between the two checklists.

Table 2

Errors used in laboratory experiment
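
As an illustration of the counterbalancing described above, the following is a minimal sketch in Python (not part of the study materials) that alternates which checklist each participant uses first and shuffles a matched set of seeded errors across the seven pumps checked with each checklist. The participant and pump counts match the study, but the error mix shown is a placeholder rather than the actual error script from table 2.

import random

PARTICIPANTS = [f"nurse_{i:02d}" for i in range(1, 11)]  # 10 participating nurses
CHECKLISTS = ["old", "new"]

# Placeholder error set for the seven pumps checked with each checklist; the same
# (matched) set is used for both checklists. Not the study's actual seeded errors.
ERROR_SET = ["pump_programming", "pump_programming", "pump_programming",
             "order_label_mismatch", "patient_id", "clinical_decision", "no_error"]

def build_schedule(seed=1):
    rng = random.Random(seed)
    schedule = []
    for idx, nurse in enumerate(PARTICIPANTS):
        # Order counterbalancing: alternate which checklist is used first.
        order = CHECKLISTS if idx % 2 == 0 else list(reversed(CHECKLISTS))
        for checklist in order:
            pumps = ERROR_SET[:]      # seven pumps for this checklist
            rng.shuffle(pumps)        # vary error positions between participants
            schedule.append({"participant": nurse,
                             "checklist": checklist,
                             "pumps": pumps})
    return schedule

for session in build_schedule()[:4]:  # show the first two participants' sessions
    print(session)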

Two observers collected data on the number and type of errors detected and time taken to complete each check.

Data analysis

Error-detection rates were analysed using a 2 (checklist type; old vs new)×4 (error type; pump programming vs mismatch vs patient ID vs clinical decision) repeated-measures analysis of variance (ANOVA) with an α level of 0.05. Differences between means for each of the four error types were assessed using post-hoc pairwise comparisons with the Bonferroni correction. Time to complete the checks was analysed using a one-way (checklist type; old vs new) repeated-measures ANOVA with an α of 0.05.
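
As a companion to this description, the following is a minimal sketch (not the authors' analysis code) of how these analyses could be run in Python with pandas, statsmodels and scipy. The file names and column names (participant, checklist, error_type, detection_rate, seconds) are assumptions made for illustration.

import pandas as pd
from itertools import combinations
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# One row per participant x checklist x error type, holding that cell's detection rate
# (errors detected / errors presented): 10 participants x 2 checklists x 4 error types.
df = pd.read_csv("detection_rates.csv")  # assumed columns: participant, checklist, error_type, detection_rate

# 2 (checklist) x 4 (error type) repeated-measures ANOVA, both factors within-subject.
anova = AnovaRM(df, depvar="detection_rate", subject="participant",
                within=["checklist", "error_type"]).fit()
print(anova.anova_table)

# Post-hoc pairwise comparisons between error types (averaged over checklist),
# using paired t tests with a Bonferroni correction.
cell_means = (df.groupby(["participant", "error_type"])["detection_rate"]
                .mean().unstack())
pairs = list(combinations(cell_means.columns, 2))
for a, b in pairs:
    t, p = stats.ttest_rel(cell_means[a], cell_means[b])
    print(f"{a} vs {b}: t = {t:.2f}, corrected p = {min(p * len(pairs), 1.0):.3f}")

# One-way repeated-measures ANOVA on time to complete a check (old vs new checklist),
# after averaging each participant's check times within each checklist.
times = pd.read_csv("check_times.csv")  # assumed columns: participant, checklist, seconds
per_participant = times.groupby(["participant", "checklist"], as_index=False)["seconds"].mean()
time_anova = AnovaRM(per_participant, depvar="seconds", subject="participant",
                     within=["checklist"]).fit()
print(time_anova.anova_table)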

Results

Overall (all error types combined), the new checklist helped nurses to detect more errors (55%; 71/130) than the old checklist (38%; 49/130) (F(1,9)=26.64, p<0.01).

Error detection

There was a significant interaction between checklist type and error type (F(3,27)=7.31, p<0.01) (figure 5).

Figure 5

Error-detection rates by error type and checklist type.

Errors in pump programming

There was no significant difference in detection of pump programming errors between the checklists (90% with the old checklist (51/60); and 80% with the new checklist (48/60)) (p>0.05). We expected that the change in the order of items on the checklist would improve the rate of error detection by eliminating confirmation bias, but this was not the case.

Errors in patient identification

Detection of identification errors with the new checklist (80%; 16/20) was significantly higher than with the old checklist (15%; 3/20) (p<0.01). Thus, the addition of the specific item on the checklist (ie, ‘check MRN (medical record number) and name from armband to medication label’) had a positive impact on error detection.

Mismatches between order and label

Overall, detection of mismatch errors was low, and there was no significant difference between the old checklist (45%; 9/20) and the new checklist (60%; 12/20) (p>0.05). Since no changes had been made to the checklist items for this task, this result was not surprising.

Clinical decision errors

Neither checklist helped nurses to identify clinical errors (none of these was detected; 0/30). Thus, the addition of the general reminder to stop and think critically had no impact on error detection (p>0.05).

Efficiency

On average, nurses took 2:16 (minutes:seconds) to complete a check with the old checklist and 1:55 with the new checklist; this 21 s improvement was not statistically significant (p>0.05). Thus, despite the addition of a step for verifying patient identity, which required the nurse to travel to the bedside, the new checklist took no longer to complete. Further, nurses commented that the new checklist seemed easier to use.

Discussion

In this study we aimed to determine which checklist features impacted nurses' ability to detect different types of errors in a double checking process. We found a wide range in error-detection rates: as few as 0% and as many as 90% of errors were identified, depending on the type of error and the checklist used.

Specific reminders are effective but not failsafe

When the checklist included specific instructions detailing what to look for and where to look (such as the pump programming items on both checklists and the patient identification item on the new checklist), detection rates were much higher (80–90%) than when the instructions were less specific. For example, the items for detecting mismatches between the order and label simply instructed nurses to check the medication label against the original order but did not specify which data points to check; the detection rates for this type of error were quite low (45–60%). The addition of a general reminder to think critically did not help nurses detect any clinical errors. However, adding a specific reminder to check identification from the patient's armband to the drug label improved error detection by 65 percentage points. Thus, specific checklist items can help clinicians identify well-defined, specific errors. Although this is an encouraging finding, it is important to note that 10–20% of errors still went undetected when they were accompanied by specific instructions, highlighting that human checking processes are not failsafe.

Mechanistic versus abstract tasks

To detect most of the medication errors in this study, nurses had to mechanistically compare data from one source (eg, rate in ml/h on a drug label) against data from another source (eg, rate on an infusion pump in ml/h) to determine if the two matched. In contrast, detection of clinical decision errors required nurses to compare data from one tangible source (eg, dosage in mg from a physician's order) against their abstract clinical knowledge of chemotherapy protocols, in the context of a specific patient, to determine if all the details of the order were appropriate. Nurses were much better at detecting errors that required mechanistic tasks than those that required critical thought. This difference in performance is consistent with the idea that it is more difficult to detect strategic mistakes than tactical mistakes and that execution errors are more easily detected than method errors.24 25

Further, whereas the use of specific instructions helped nurses with mechanistic error-detection tasks, the addition of a general reminder to think critically and remember the ‘five rights’ of medication administration did not help with the abstract task. Expecting clinicians to mechanistically review several specific aspects of medication administration and then to switch to thinking abstractly about clinical appropriateness may be unrealistic.26 If abstract thinking tasks are essential to the final medication administration process, it may be necessary to separate these from the mechanistic tasks and their associated checklists, and to develop other strategies for supporting abstract clinical thought. Further research is needed in this area.

Independence of double checking

Our revised approach to double checking was designed to encourage nurses to review the pump settings before looking at the prescription details, thus ensuring ‘independence’ through the elimination of confirmation bias. We believed this approach would result in higher error-detection rates, but found no difference between the old and new checklists in detection of errors in pump programming.

Upon analysis, we realised that the new design did not in fact eliminate confirmation bias; instead, confirmation bias was shifted from the point of checking the pump to the point of checking the newly completed checklist against the prescription details. Hence, the nurses might still have seen what they expected to see on the order when they verified the (sometimes incorrect) values from the checklist against the prescription. More research is needed to determine how to reduce or eliminate confirmation bias from a human verification process.

For double checking processes, we feel that the most important factor is the completion, by a second individual, of a well-designed, easy-to-use checklist with a specific item for each specific high-risk error.

Developing a checklist for detecting errors

On the basis of our findings, we recommend the following steps when developing a checklist for detecting errors (table 3).

Table 3

Steps for developing a checklist for detecting errors

Study limitations

This research focused on the detection by nurses of errors related to the administration of chemotherapy and did not include other clinical areas. However, we anticipate that our findings and recommendations will be applicable to any context in which a care provider must perform a series of mechanistic tasks under a high cognitive load.

For ethical reasons, we were not able to conduct this controlled experiment of error detection in a live clinical environment. Because of the short-term nature of the simulation, we could not determine the long-term effect of the new checklist on nurses' behaviour. Further, participants had been using the 'old' checklist in their unit for some time. We had expected nurses to detect more errors with the old checklist as a result of their familiarity with it, but this was not the case. Another limitation was that interruptions were not scripted but instead flowed naturally from the actors to create a sense of realism; they were therefore more variable across sessions than scripted interruptions would have been.

Conclusions

This research has highlighted that checklists incorporating specific step-by-step instructions are useful for detecting certain errors. Although technological systems can achieve much higher accuracy and reliability than any human process, fully integrated and automated systems for complex healthcare are unlikely to exist in the near future, and we question whether this goal is ultimately achievable. As such, checklists remain a necessary safety tool for clinicians performing long series of mechanistic tasks and must be designed to support this activity. However, further research is needed to determine how best to assist clinicians in switching between mechanistic tasks and abstract problem solving.

Acknowledgments

The following people supported this research: H Colbert, A Tosine, D Incekol, J Stewart, S Ladak, R Lopez, A Chagpar, C Banez, S Savage, D Grosse Wentrup, C Masino, colleagues at the Centre for Global eHealth Innovation and the study participants.

References

Footnotes

  • Funding This research was conducted under a research grant from the Canadian Patient Safety Institute (Grant # RFA0506284).

  • Competing interests None.

  • Ethics approval Ethics approval was provided by the University Health Network, Research Ethics Board, Toronto, Ontario, Canada.
