
Computerised physician order entry-related medication errors: analysis of reported errors and vulnerability testing of current systems
G D Schiff1,2, M G Amato1,3, T Eguale1,2,4, J J Boehne1, A Wright1,2, R Koppel5, A H Rashidee6, R B Elson7, D L Whitney8, T-T Thach1, D W Bates1,2,9, A C Seger1,3
1 Brigham and Women's Hospital Division of General Medicine and Primary Care, Boston, Massachusetts, USA
2 Harvard Medical School, Boston, Massachusetts, USA
3 MCPHS University, Boston, Massachusetts, USA
4 McGill University, Montreal, Quebec, Canada
5 University of Pennsylvania, Philadelphia, Pennsylvania, USA
6 Quantros, San Francisco, California, USA
7 MetroHealth Center for HealthCare Research and Policy, Cleveland, Ohio, USA
8 Baylor College of Medicine, Houston, Texas, USA
9 Harvard School of Public Health, Boston, Massachusetts, USA
Correspondence to Dr Gordon Schiff, Division of General Internal Medicine, Center for Patient Safety Research and Practice, Brigham and Women's Hospital, 1620 Tremont St, 3rd Fl, Boston, MA 02120, USA; gschiff@partners.org

Abstract

Importance Medication computerised provider order entry (CPOE) has been shown to decrease errors and is being widely adopted. However, CPOE also has potential for introducing or contributing to errors.

Objectives The objectives of this study are to (a) analyse medication error reports where CPOE was reported as a ‘contributing cause’ and (b) develop ‘use cases’ based on these reports to test vulnerability of current CPOE systems to these errors.

Methods Medication errors reported to the United States Pharmacopeia (USP) MEDMARX reporting system were reviewed, and a taxonomy was developed for CPOE-related errors. For each error, we evaluated what went wrong and why, and identified potential prevention strategies and recurring error scenarios. These scenarios were then used to test the vulnerability of leading CPOE systems by asking typical users to enter the erroneous orders and assessing the degree to which these problematic orders could be placed.

Results Between 2003 and 2010, 1.04 million medication errors were reported to MEDMARX, of which 63 040 were reported as CPOE related. A review of 10 060 CPOE-related cases was used to derive 101 codes describing what went wrong, 67 codes describing reasons why errors occurred, 73 codes describing potential prevention strategies and 21 codes describing recurring error scenarios. The ability to enter these erroneous order scenarios was then tested on 13 CPOE systems at 16 sites. Overall, 298 (79.5%) of the erroneous orders could be entered, including 100 (28.0%) that were 'easily' placed and another 101 (28.3%) placed with only minor workarounds and no warnings.

Conclusions and relevance Medication error reports provide valuable information for understanding CPOE-related errors. Reports were useful for developing taxonomy and identifying recurring errors to which current CPOE systems are vulnerable. Enhanced monitoring, reporting and testing of CPOE systems are important to improve CPOE safety.

  • Patient safety
  • Decision support, computerized
  • Human error
  • Information technology
  • Medication safety

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Computerised provider order entry (CPOE) has long been considered and demonstrated to be a high-leverage tool for preventing medication errors, and incentives are being provided to accelerate its adoption.1–3 However, there is a growing awareness and increasing documentation of concerns that CPOE can also introduce or facilitate new errors.4–6 The Institute of Medicine Committee report ‘Health IT and Patient Safety: Building Safer Systems for Better Care’ recognised that Health Information Technology (HIT) is part of a complex sociotechnical system and recommended investing in efforts to uncover and understand the vulnerabilities of HIT systems to errors and unintended consequences.7 More recently, the US Food and Drug Administration Safety and Innovation Act (FDASIA) committee similarly recommended developing approaches for reporting, with a key recommendation advocating compilation of error reports across multiple systems.8

In 1999, the United States Pharmacopeia (USP) launched a pioneering online medication error reporting system that has now collected more than two million medication errors.9 In 2003, in response to the growing number of reports suggesting that CPOE was playing a role in the medication errors being reported, USP added a coded field for reporters to check off ‘CPOE’ as a contributing cause of the error. Shortly thereafter, the USP's MEDMARX annual report stated that computer entry and CPOE errors had become the third leading contributing cause being checked off in medication error submissions.10 However, since that initial report there has been no detailed investigation or analysis of CPOE-related MEDMARX error reports. Furthermore, the report narratives have not been assessed previously. To better understand how and why the errors were occurring, as well as ways they could have been prevented, we undertook a study of the USP MEDMARX CPOE medication error reports, subjecting these reports to a detailed review as well as testing the vulnerability of current CPOE systems to the types of errors identified.

The aims of this study are to (1) analyse the USP MEDMARX medication error reports where CPOE was checked off as a ‘contributing cause’ of the error by performing in-depth review of 10 000 of the error report narratives to understand details of each error and develop a new taxonomy for CPOE-related errors and (2) develop and test ‘use cases’ based on these actual error reports and assess the vulnerability of leading CPOE systems to these errors.

Methods

Phase I: MEDMARX data analysis—taxonomy development and coding

We queried the USP MEDMARX (now part of Quantros Safety and Risk Management suite) for all medication error reports from January 2003 to April 2010 that were coded by the error reporters as having ‘CPOE’ as one of the ‘contributing causes’ of the errors. These spontaneous error reports were submitted from institutions subscribing to the MEDMARX medication error reporting system. Reporters typically included a mix of centralised quality assurance staff (who collected reports from front-line staff and then entered reports for their institution into MEDMARX) and, in a minority of institutions, front-line staff who entered reports directly.
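To make the selection criteria concrete, the sketch below shows one way such a query could be expressed against a tabular export of the reports; the file name and column names (report_date, contributing_causes) are hypothetical illustrations, not the actual MEDMARX/Quantros schema.

```python
import pandas as pd

# Hypothetical export of MEDMARX reports; file and column names are illustrative only.
reports = pd.read_csv("medmarx_export.csv", parse_dates=["report_date"])

# Keep reports from January 2003 to April 2010 in which the reporter
# checked off 'CPOE' as one of the contributing causes of the error.
in_window = reports["report_date"].between(
    pd.Timestamp("2003-01-01"), pd.Timestamp("2010-04-30")
)
cpoe_flagged = reports["contributing_causes"].str.contains("CPOE", na=False)

cpoe_reports = reports[in_window & cpoe_flagged]
print(f"{len(cpoe_reports)} of {len(reports)} reports flagged as CPOE related")
```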

We identified a total of 63 040 medication error reports having CPOE checked off as a contributing cause among the 1.04 million total reports. These reports served as the data for analysis of CPOE errors and taxonomy development. A total of 10 060 error reports were then manually reviewed, representing all 191 of the reports categorised as National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) outcome categories E–I (ie, categories where patient harm occurred, which we designated as highest priority to review in detail) plus a random sample of the remaining A–E category reports.11 Each of these 10 060 error reports was reviewed and coded by one of three clinical pharmacists (MGA, JJB and ACS), with an emphasis on detailed review of the free text narrative description of the error. For cases where there were questions, or where new codes were being considered, the cases were re-reviewed in detail by the entire team of three pharmacists and a general internist (GDS). These codes were grounded in the data and served as the basis for our taxonomy development (grounded qualitative analysis).12

The coding was done using a customised qualitative-coding software tool developed in Microsoft Access with codes progressively developed or added based on the error report narratives and iteratively refined via weekly meetings of the clinical review team. Each report was coded to categorise three elements of the error: (a) what happened, (b) why it happened and (c) potential prevention strategies. Pharmacist reviewers were instructed to code what and why exclusively based on information in the narrative and accompanying report. For potential prevention strategies, pharmacist investigators were encouraged to suggest ways the error could have been prevented based on the report and their knowledge of medication safety and information technology. To ensure conservative coding, when reports lacked sufficient information to determine a what, why and prevention classification, reviewers were instructed to assign them as ‘unknown’. Each case could be assigned one or more codes in each category. Inter-reviewer reliability was assessed using a 1% random sample of reviews to calculate a kappa statistic. Once the coding was completed, the codes were re-reviewed and reorganised using several card sorting and iterative team consensus exercises to group and refine the final taxonomy.
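For the inter-reviewer reliability check, percentage agreement and a kappa statistic on the double-coded sample can be computed in a few lines; the sketch below uses scikit-learn's cohen_kappa_score with made-up code labels to illustrate the calculation, not the study's actual data or coding tool.

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative 'what happened' codes assigned by two pharmacist reviewers to the
# same double-coded reports (labels are examples only, not study data).
reviewer_a = ["wrong_dose", "wrong_drug", "unknown", "wrong_frequency", "wrong_dose"]
reviewer_b = ["wrong_dose", "wrong_drug", "wrong_dose", "wrong_frequency", "wrong_dose"]

agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)
kappa = cohen_kappa_score(reviewer_a, reviewer_b)

print(f"Percent agreement: {agreement:.0%}, Cohen's kappa: {kappa:.2f}")
```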

Phase II: CPOE vulnerability testing based on reported error scenarios

During the above qualitative review of the error reports, reviewers were instructed to flag cases that might serve as representative ‘test cases’ to assess whether errors identified could be replicated in current CPOE systems. A weighted scoring system based on error frequency, severity, generalisability and testability was used to narrow this list down to key error scenarios for testing. Based on this prioritisation, 21 test scenarios were chosen. Scenarios included erroneous or problematic CPOE orders related to wrong units, major overdoses, drug allergies, order element omission errors, wrong frequency and drug–disease contraindications, as well as three ‘correct’ but ‘complex’ test orders (eg, prednisone tapers, alternate-day dosing, non-formulary drug) that reports often suggested led to problem-prone workarounds such as potentially confusing free text comments in the drug order (see online supplementary appendix 1 for list of test case scenarios).
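The weighted prioritisation can be pictured as a simple score over the stated criteria; the weights and candidate scenarios in this sketch are hypothetical, since the exact weighting scheme is not published here.

```python
# Hypothetical weights for the stated prioritisation criteria (relative weights only;
# any consistent weighting would work the same way).
WEIGHTS = {"frequency": 0.4, "severity": 0.3, "generalisability": 0.2, "testability": 0.1}

def priority_score(candidate: dict) -> float:
    """Weighted sum of criterion ratings (eg, each rated 1-5 by reviewers)."""
    return sum(WEIGHTS[criterion] * candidate[criterion] for criterion in WEIGHTS)

candidates = [
    {"name": "insulin ordered in mL instead of units",
     "frequency": 4, "severity": 5, "generalisability": 5, "testability": 4},
    {"name": "duplicate stat order",
     "frequency": 3, "severity": 2, "generalisability": 4, "testability": 5},
]

# Keep the highest-scoring scenarios for vulnerability testing.
for c in sorted(candidates, key=priority_score, reverse=True):
    print(f"{c['name']}: {priority_score(c):.2f}")
```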

We identified a convenience sample of leading vendor and homegrown CPOE system test sites and obtained institutional and institutional review board permission to enter these problematic orders on test patients at these sites. To test each of these error scenarios, we recruited one to two typical users (mostly medical residents or primary care attending physicians) with at least 1 year of experience with the CPOE system (range 1–8 years) and instructed them to enter the erroneous orders. Users understood that these orders were problematic but were instructed to proceed with placing them as they typically would, using any methods they might routinely use to enter a desired order. Whether orders were successfully entered, and how the prescribers and CPOE systems behaved, was recorded by physician or pharmacist (GDS, TE, MGA and ACS) and research assistant (DLW) observers, who rated the ease or difficulty of entering the erroneous or complex orders using predefined operational definitions (table 1).

Table 1

Operational definitions used to classify ease of entry of ‘error scenario’ test orders
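During the observation sessions, each attempted order can be recorded against such a scale; the sketch below is one plausible encoding, consistent with figure 1's description of a five-point scale where 5 means the erroneous order was impossible to enter, but the level names are assumptions rather than table 1's exact wording.

```python
from dataclasses import dataclass
from enum import IntEnum

class EaseOfEntry(IntEnum):
    """Assumed five-point scale; higher scores mean the system offered more protection."""
    ENTERED_EASILY = 1       # accepted with no extra steps or warnings
    MINOR_WORKAROUND = 2     # entered after small adjustments, with no warning
    WARNING_OVERRIDDEN = 3   # a warning fired but was easily over-ridden
    MAJOR_WORKAROUND = 4     # entered only with substantial extra effort
    BLOCKED = 5              # the erroneous order could not be placed

@dataclass
class TestObservation:
    site: str
    system: str
    scenario: str
    rating: EaseOfEntry
    warning_shown: bool

# Example record from a single test attempt (values are illustrative).
obs = TestObservation("site_01", "vendor_A", "insulin 60 mL", EaseOfEntry.MINOR_WORKAROUND, False)
print(obs)
```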

Results

Phase I: qualitative review

Of 1.04 million reported errors, 63 040 (6.1%) were classified by the reporters as CPOE related. Our pharmacists reviewed and coded 10 060 (15.7% sample) of these 63 040 reports and derived a taxonomy that included 101 codes describing what occurred, 67 codes describing why errors occurred as well as 73 codes describing potential prevention strategies (see online supplementary appendix 2 for full taxonomy). Tables 2–4 summarise findings for the top 25 most frequent codes assigned for the what, why and prevention codes. Many reports lacked sufficient detail describing the error to permit adequate coding, particularly to classify why the error occurred. Although all of these reports had ‘CPOE’ checked off by the reporter as a contributing cause, our reviewers could determine the role of the CPOE system in only 5004 (49.8%) of the reports based on report content alone. Pharmacists’ inter-rater agreement rates and kappa scores for the taxonomy coding of what occurred and why the error occurred were 66%, kappa 0.56 (95% CI 0.39 to 0.72) and 64%, kappa 0.58 (95% CI 0.42 to 0.73), respectively.

Table 2

Top 25 ‘what happened?’ codes

Table 3

Top 25 ‘why did it happen?’ codes

Table 4

Top 25 prevention codes

Phase II results: CPOE vulnerability testing

Our pharmacist reviewers identified 338 error reports as potential candidate scenarios for testing the vulnerability of current systems. This list was narrowed to 21 scenarios by combining similar scenario types (eg, various drug overdosages; orders for a drug to which the patient was allergic) and prioritised based on the preselected criteria of (a) frequency, (b) seriousness and (c) testability. These scenarios included five inpatient-only scenarios (eg, intravenous orders) that were not tested on outpatient systems, and three ‘complex’ orders (not errors per se, but designed based on recurring reports of similar, potentially error-prone orders). We recruited a convenience sample of 13 representative systems (four homegrown, one open source, eight commercial, including each of the leading inpatient and outpatient vendors) at 16 test sites. Not all tests could be performed on all systems (because of formulary and other design limitations). Excluding these scenarios, we entered a total of 375 erroneous orders during 24 testing sessions on 13 systems at 16 test sites.

Overall, 298 (79.5%) erroneous orders were able to be placed, including 100 (28.0%) that were ‘easily’ placed (the order was simply accepted with no extra steps or warnings) and another 101 (28.3%) placed with only ‘minor workarounds’ (eg, adjusting a default dosage) and no warnings. Thus, 201 (56.3%) of the errors could be relatively easily replicated (entered easily or with minor workarounds) with no warning or blocking of potentially dangerous orders. Table 5 lists the frequencies of how often erroneous orders were prevented versus going through easily or with some difficulty. Only 26.6% of orders generated specific warnings related to the erroneous order. Of these, 69% were passive alerts (information display only, or easily over-ridden and/or ignored). Another 29% required workarounds but could nonetheless still be entered. Notable failures included erroneous orders for pioglitazone accepted for patients with congestive heart failure in 87.5% of attempts, orders for insulin 60 ‘mL’ (rather than ‘units’) going through in 75.0% of attempts and no specific warning for a 1000-fold levothyroxine overdose in 37.5% of attempts. Figure 1 illustrates a breakdown of which erroneous order scenarios had greater protection (ie, were more difficult to enter; higher mean scores overall) versus those where systems were generally more vulnerable (lower mean scores overall). Finally, for 40 of 72 (55.6%) of the more complex, error-prone test orders (eg, variable daily warfarin dosing, prednisone taper), prescribers encountered problems or ordering difficulties, creating the potential for error-prone workarounds.

Table 5

Frequency distribution of erroneous orders going through, ease with which they went through, and whether there was a warning

Figure 1

Radar plot showing the mean difficulty-of-entry score for each test scenario across all tested computerised provider order entry (CPOE) systems. For maximum safety, the plot would ideally occupy the outermost grid ring (score 5), that is, the erroneous orders would be impossible to enter. For example, the greatest protection was against the 1000-fold levothyroxine overdose, whereas drug–disease contraindication checking had the lowest mean score, indicating the least protection and making this erroneous order the easiest to enter.
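A radar plot like figure 1 can be produced directly from per-scenario mean scores; the scenario names and values below are placeholders rather than the study's actual results, and the matplotlib pattern shown is just one common way to draw such a chart.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder per-scenario mean scores (1 = entered easily, 5 = impossible to enter).
scenarios = ["levothyroxine 1000-fold overdose", "insulin units vs mL",
             "drug allergy", "drug-disease contraindication", "wrong frequency"]
mean_scores = [4.2, 2.1, 3.0, 1.6, 2.5]

# Spread the scenarios around the circle and close the polygon.
angles = np.linspace(0, 2 * np.pi, len(scenarios), endpoint=False).tolist()
scores = mean_scores + mean_scores[:1]
angles = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, scores, marker="o")
ax.fill(angles, scores, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(scenarios, fontsize=8)
ax.set_ylim(0, 5)  # outermost ring (5) = erroneous order impossible to enter
plt.tight_layout()
plt.show()
```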

Discussion

We analysed a large medication error database for errors that were reported as being related to CPOE and developed a new taxonomy of the types, causes and prevention strategies that we could identify in these reports, resulting in a number of useful insights regarding the frequency of specific error types. We then performed vulnerability testing to examine whether these errors could be replicated in current CPOE systems with the worrisome finding that the majority (overall, 56.3%) of the selected erroneous orders could be readily entered.

Report narratives both provided rich details of the types of errors that occur in CPOE systems and served as the basis for developing a taxonomy that facilitated classification of these error types. Leading CPOE-related errors included missing or erroneous sig (label directions) or patient instructions; wrong dose, strength or quantity; scheduling problems (particularly related to inpatient orders and the timing of stat versus continuing orders); delays in medication processing or administration due to confusing orders; and wrong drug or wrong patient errors.

Many of these problems are not unique to CPOE and could also occur with handwritten ordering, although some, in theory (eg, drug overdosages), should be preventable with properly designed electronic systems.2 ,13–15 Reasons for these errors were discernible for roughly half of the error narrative reports and included problems with miscommunication between multiple electronic or hybrid paper-electronic systems, user issues such as failure to follow established protocols, inexperience or lack of training in using the CPOE system, typing and pull-down menu errors, medication reconciliation issues, ignoring or over-riding alerts and confusion related to or arising from comments fields.

Although it may be argued that many errors were isolated occurrences or reflected the vulnerabilities of older systems, when we tested current systems we found a high degree of susceptibility to many of these errors. Nearly 80% of the erroneous orders could be entered, with more than half entered with little or no difficulty or warnings. More than a quarter (28.0%) of the orders were easily entered (in the words of our test physicians and research pharmacists, ‘sailed right through’), with no warnings or additional efforts on the part of the ordering physicians.

Systems that overalert or frustrate busy physicians attempting to enter appropriate orders are also an important problem and can lead to so-called ‘alert fatigue’. Thus, systems need to balance ease of ordering with appropriate protections. Our study was not designed to determine the best balance, although better designed systems have the potential to achieve both better efficiency (eg, well-designed order sets) and an improved signal-to-noise ratio (a better ratio of appropriate to nuisance alerts).15–18 Shockingly, one of our test systems had no warning alerts fire in response to our erroneous test orders; all alerts had been turned off for a system upgrade several months earlier, and it was not until we performed our testing that it was discovered they had never been reinstated. We documented a high degree of variability in vulnerability and alerting from system to system. We observed variations between different implementations of the same system, and even between different users entering orders in different ways at the same site. This is similar to the findings when the Leapfrog tool was used to test potential errors, which revealed wide variations in the detection of test orders that would cause adverse events across varied local implementations, even of the same vendor's systems.19
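The ‘all alerts silently off after an upgrade’ failure suggests a simple regression check that a site could run after any system change: enter a small set of known-erroneous orders for a test patient and confirm that the expected alerts still fire. The sketch below is a generic illustration against a hypothetical test harness (place_test_order), not any vendor's actual interface.

```python
# Hypothetical post-upgrade smoke test. `place_test_order` stands in for whatever
# test harness or API a site has available; it is assumed here, not a real vendor call.
# It should enter the order for a test patient and return the set of alert types shown.

EXPECTED_ALERTS = {
    "levothyroxine 5000 mcg daily": "dose_range",
    "insulin 60 mL subcutaneous": "unit_mismatch",
    "pioglitazone for patient with CHF": "drug_disease",
}

def run_alert_smoke_test(place_test_order) -> bool:
    """Return True only if every known-erroneous test order triggers its expected alert."""
    all_ok = True
    for order, expected_alert in EXPECTED_ALERTS.items():
        fired = place_test_order(order)
        if expected_alert not in fired:
            print(f"FAIL: '{order}' did not trigger a {expected_alert} alert")
            all_ok = False
    return all_ok
```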

From the policy perspective, one approach would be to regulate electronic health records and/or clinical decision support. Another would be to allow vendors to continue the current approach, in which there is relatively little regulation, but to improve postmarket surveillance. The FDASIA committee has recommended the latter approach to the Food and Drug Administration (FDA), the Office of the National Coordinator and the Federal Communications Commission.8 The data from our study suggest that it is possible to aggregate large numbers of reports across multiple vendors and draw useful conclusions.

Our study was limited by the fact that the reported CPOE-related errors were based on spontaneous self-reporting of medication errors. Thus, no conclusions can be drawn about the actual incidence or relative frequency of these errors and problems. In addition to the well-known problem of under-reporting from spontaneous reporting systems, there are problems with the quality and non-verifiability of the reports we studied.20 ,21 Many reports were incomplete and lacked details of the role CPOE played in the error (ie, indicating only that the box had been checked noting CPOE as a contributing factor). Thus, as instructed, our reviewers conservatively coded what happened and why in these reports as ‘unknown’, making this a leading category. Despite these limitations in the quantity and quality of these spontaneous reports, the errors speak for themselves as noteworthy and likely (particularly based on our vulnerability testing results) representative CPOE safety issues. This points to the need to improve the quality of such reports and perhaps standardise the collection of contributing causal factors. Additional potential limitations are that reports and subscribers to MEDMARX may not be representative of all prescribing systems and that outpatient reports were under-represented, as subscribers were mainly hospitals and hospital systems. Many of the reports were nearly a decade old, although our efforts to replicate these errors demonstrate that vulnerabilities also exist in current systems. Our study did not examine the likelihood that the erroneous orders placed would be intercepted by pharmacists or other staff and hence would not cause harm. Nonetheless, the MEDMARX reports contained many examples of errors that did reach or harm patients. Finally, our qualitative pharmacist coding and rating of the error scenarios was based on subjective reviewer judgement. To offset this, we assigned clear operational definitions to the codes as they were developed, used a consensus process for adjudicating questions or disagreements and achieved reasonably good inter-reviewer reliability scores in assessment of the reports. We have continued to refine this taxonomy for a white paper to be published by the US FDA that can help guide future research as well as organisations analysing CPOE-HIT error reports in the future.

In conclusion, we reviewed error reports to identify patterns of CPOE-related errors and used them to develop a new taxonomy and recurring error scenarios. We then tested current systems and found areas of noteworthy vulnerability. Developers and users need to be aware of this potential for error and should build in protection strategies at multiple levels to learn from and protect patients by continuously improving the safety of CPOE systems.22–25 Efforts that permit both better reporting and awareness of medication errors as well as testing the vulnerabilities of local CPOE systems are needed; such efforts are crucial to safe prescribing, ongoing postimplementation monitoring and improvement of CPOE systems.

Acknowledgments

We gratefully acknowledge the National Patient Safety Foundation (NPSF) for funding for this study, as well as United States Pharmacopeia and Quantros for providing the MEDMARX error report data. We also acknowledge additional assistance received from Dr Sarah Slight in reviewing the data and taxonomy.

References

Supplementary materials

  • Supplementary Data

Footnotes

  • Twitter Follow David Bates at @dbatessafety

  • Contributors GDS, MGA and TE: contributed to conception and design, analysis and interpretation of the data and results, drafting and writing the article, revising it critically for important intellectual content and final approval of the version to be published. JJB, AW, RK, AHR, RBE and ACS: contributed to supervision of the paper, developing hypotheses, analysis and interpretation of the data and results, as well as drafting and writing the article, revising it critically for important intellectual content and final approval of the version to be published. DLW, T-TT and DWB: contributed to interpretation of the data and revising the paper critically for important intellectual content. None of the sponsors had a role in the design and conduct of the study; the collection, management, analysis and interpretation of the data; the preparation, review or approval of the manuscript; or the decision to submit the manuscript for publication.

  • Competing interests GDS is currently working on an FDA-funded grant investigating CPOE systems and medication safety. RK works with a company, Wearable Intelligence, that is bringing Google Glass to healthcare, specifically for use in rounding, with Emergency Medical Systems crews, for handoffs, for use in the operating room to display what the surgeon is viewing and for alerting clinicians. There is no direct competition and it does not affect the issues discussed in our article, but it is a role in an HIT effort. DWB is a coinventor on patent no. 6029138, held by Brigham and Women's Hospital, on the use of decision support software for medical management, licensed to the Medicalis Corporation. He holds a minority equity position in the privately held company Medicalis, which develops web-based decision support for radiology test ordering. He consults for EarlySense, which makes patient safety monitoring systems.

  • Provenance and peer review Not commissioned; internally peer reviewed.
