Background Electronic health records (EHR) can improve safety via computerised physician order entry with clinical decision support, designed in part to alert providers and prevent potential adverse drug events at entry and before they reach the patient. However, early evidence suggested performance at preventing adverse drug events was mixed.
Methods We used data from a national, longitudinal sample of 1527 US hospitals that took a safety performance assessment between 2009 and 2016, using simulated medication orders to test how well their EHR prevented medication errors with potential for patient harm. We calculated descriptive statistics on assessment performance over time, by years of hospital experience with the test and across hospital characteristics. Finally, we used ordinary least squares regression to identify hospital characteristics associated with higher test performance.
Results The average hospital EHR system correctly prevented only 54.0% of potential adverse drug events tested on the 44-order safety performance assessment in 2009; this rose to 61.6% in 2016. Hospitals that took the assessment multiple times performed better in subsequent years than those taking the test the first time, from 55.2% in the first year of test experience to 70.3% in the eighth, suggesting efforts to participate in voluntary self-assessment and improvement may be helpful in improving medication safety performance.
Conclusion Hospital medication order safety performance has improved over time but is far from perfect. The specifics of EHR medication safety implementation and improvement play a key role in realising the benefits of computerising prescribing, as organisations have substantial latitude in terms of what they implement. Intentional quality improvement efforts appear to be a critical part of high safety performance and may indicate the importance of a culture of safety.
- decision support, computerised
- hospital medicine
- information technology
- medication safety
- patient safety
In the 10 years since the Health Information Technology for Economic and Clinical Health (HITECH) Act passed in 2009 in the USA, electronic health records (EHR) have become an integral tool for improving patient safety.1 2 One main mechanism by which EHRs can improve patient safety is the implementation of computerised provider order entry (CPOE) coupled with clinical decision support tools. In particular, computerisation of drug ordering linked with clinical decision support can reduce adverse drug event rates.3
CPOE allows physicians and other clinical care providers to write orders for hospitalised patients electronically, rather than via verbal or written communication. This allows clinical decision support to be given at the point of care: it can encourage clinicians to make better medication choices for patients and make them aware of potential safety issues related to the order—for example, potential allergic reactions or drug-drug interactions that may cause adverse effects for patients. Since medication errors can lead to serious patient harm and even mortality, preventing these adverse drug events represents a first-order concern in improving patient safety.4–6 Intervention at the point of care for safer medication use can stop patient harm before it occurs.
While adoption of some form of EHR among US hospitals has become nearly ubiquitous,7 CPOE safety results have been mixed. Many early studies that showed safety benefit were done with internally developed EHRs,4 6 8 9 while nearly all implementations today are of commercial vendor products.
But picking a specific vendor does not guarantee good medication safety results. An early study of 62 hospitals found significant variation in safety performance across hospital demographics and EHR vendors, and surprisingly almost none of that variation was explained by vendor choice.10 Hospitals have tremendous latitude with respect to what they actually implement with any vendor, especially around decision support. It is therefore clear that EHR adoption does not necessarily lead to patient safety improvements—the structural quality improvement (EHR and CPOE implementation) does not necessarily lead to the process quality improvement (ensuring CPOE systems successfully prevent adverse drug events).11 CPOE systems are embedded within complex sociotechnical environments that require intentional quality improvement efforts to manifest a reduction in adverse drug events and an improvement in patient safety.8 Similarly, recent work has shown that EHR adoption has a positive effect on patient mortality, but only after a period of acclimation and system maturation.12 Other studies have found that CPOE systems may actually contribute to medical errors.13
Given the variation in studies coming from one or a few institutions, there is a need to evaluate CPOE performance on a larger scale and over time, in a much broader array of institutions including recent adopters, to determine whether the substantial public investment in EHRs appears to be driving patient safety improvements and, if so, where. To do this, we sought to answer three research questions: first, how effective are hospital CPOE systems at alerting providers to potential adverse drug events—medication errors with the potential for harm? Second, has hospital CPOE safety performance improved over time as hospitals gain experience with the technology? Finally, what hospital characteristics are associated with better CPOE safety performance? To answer these questions, we used national results from a CPOE safety performance assessment taken by a sample of 1527 hospitals over the period of 2009–2016.
Design of the CPOE assessment tool
The CPOE assessment test was designed by investigators at the Brigham and Women’s Hospital and the University of Utah and used by the Leapfrog Group, an employer group whose mission is to encourage improvement to healthcare quality via ‘big leaps’ in patient safety.14 15 The CPOE assessment test is included as part of the Leapfrog Annual Safe Practices Survey, an annual survey distributed nationally to US hospitals with results reported publicly.16 The CPOE assessment test is one of several process quality categories used by the Leapfrog Group in evaluating whether hospitals meet their quality standard.
The Leapfrog CPOE assessment test uses simulated patients and medication orders to mimic the experience of a physician writing orders for actual patients to evaluate safety performance. The simulated test patients and orders were developed by a group of experts on adverse drug events and computerised order entry clinical decision support to test how effectively hospital CPOE systems prevent potential adverse drug events. The simulated orders were developed specifically to test the potential adverse drug events likely to cause the most serious patient harm, using cases from studies that linked actual adverse drug events back to failures in the ordering process.10 17 The simulated patients and orders were meant to test whether hospital CPOE systems would alert providers to a variety of potential adverse drug events including drug allergy, drug dose (for both single and daily orders), drug renal status, drug diagnosis, drug age and drug-drug contraindications. These alerts could be decision support alerts or they could be a hard stop used to prevent a drug from being ordered by an inappropriate route, dose or frequency.17 Most of the simulated orders represent historical preventable adverse drug events from real patients who suffered injuries as the result of the associated medication errors. The primary outcome measure was whether the hospital CPOE system correctly prevented a potential adverse drug event, either by generating an alert upon entering a simulated order or by preventing the entry of an order in another way, such as a hard stop that does not allow order entry. The test changed slightly from 2010 to 2011, when the number of simulated orders was increased from 44 to 53.
Further detail on the early design and pilot testing of the Leapfrog CPOE assessment test is discussed by Metzger et al and others.10 18 Performance on the CPOE assessment test has been shown to correlate well with rates of preventable adverse drug events in hospitals.19 Example screenshots from a training version of the CPOE assessment test are available in online supplementary appendix figure 1.
Use of the CPOE assessment tool
Designated teams of hospital employees in each hospital performed the CPOE assessment test as a self-assessment each year as part of the Leapfrog Annual Practices Survey. The teams downloaded instructions, the profiles of test patients and the test orders. A physician entered the test orders for the simulated patients into the hospital’s EHR system and recorded all clinical decision support alerts that were generated, as well as other methods of preventing orders that could cause adverse drug events. The team then entered the results (whether the hospital CPOE system did or did not generate a clinical decision support alert, or included a hard stop preventing ordering of the specified drug by an inappropriate route, dose or frequency). The assessment tool then calculated and reported the overall score, as well as scores by category, back to the hospital team, but not the correct answer to each individual question. The categories of potential adverse drug events tested were: drug age, drug allergy, drug dose, drug labs, drug renal, drug route, drug diagnosis, and drug-drug and therapeutic duplication contraindications. The score data were also collected by the Leapfrog Group.
Data collection and sample
Our sample included hospitals that took the Leapfrog Annual Practices Survey, including the CPOE assessment test, in at least 1 year from 2009 through 2016. Hospitals that began taking the test but did not fully complete it in a given year were marked as incomplete and excluded for that year. If a hospital took the test multiple times in a single year, we kept the highest-scoring test. If a test order was not on the hospital’s formulary or otherwise not prescribed, it was omitted from the denominator. If a hospital’s EHR system did not have a specific alert functionality for that category, however, the test orders were not omitted.
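The scoring rules above (orders not on formulary dropped from the denominator, and only the highest-scoring attempt kept when a hospital tested multiple times in a year) can be sketched in a few lines of Python. The data structure and function names here are hypothetical, for illustration only, not the Leapfrog tool's actual implementation:

```python
# Hypothetical sketch of the scoring rules described above. Each simulated
# order result is one of: "alerted" (decision support fired or a hard stop
# blocked entry), "missed" (order accepted with no alert), or
# "not_on_formulary" (omitted from the denominator entirely).

def score_test(order_results):
    """Percent of scoreable orders the CPOE system correctly intercepted."""
    scoreable = [r for r in order_results if r != "not_on_formulary"]
    if not scoreable:
        return None  # nothing left to score
    return 100 * scoreable.count("alerted") / len(scoreable)

def best_score_per_year(tests_by_year):
    """Keep only the highest-scoring attempt for each hospital-year."""
    return {year: max(score_test(t) for t in tests)
            for year, tests in tests_by_year.items()}

# Toy example: one hospital, two attempts in 2016.
attempts = {
    2016: [
        ["alerted", "missed", "missed", "not_on_formulary"],   # 1 of 3 scoreable
        ["alerted", "alerted", "missed", "not_on_formulary"],  # 2 of 3 scoreable
    ],
}
print(best_score_per_year(attempts))  # keeps the higher (2 of 3) attempt
```

Note that the denominator adjustment means two hospitals with the same number of alerts can receive different scores if their formularies differ.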
Our final analytical sample included data from 1527 hospitals in an unbalanced panel, which included all hospitals with at least one completed test from 2009 through 2016 regardless of how many years they completed the test, for a total of 5107 unique hospital-year observations. This sample represents 24.6% of the 6210 US hospitals.20 We then linked these results with data from the American Hospital Association Annual Survey from 2009 to 2016 to capture hospital demographic information.21 Hospitals were matched based on their Medicare number in each year.
We selected a set of hospital characteristics that we expected to be associated with hospital CPOE performance based on previous studies of health information technology adoption.22 These included size as measured by number of beds; membership in a healthcare system; teaching status; ownership (private, for-profit; private, non-profit; or public, non-federal); urban or rural location; and geographic region within the USA based on the four US census regions (North-East, West, Midwest and South). We also included three other measures of hospital quality we expected to be associated with CPOE safety performance: accreditation by the Joint Commission,23 star ratings by Medicare’s Hospital Compare system for 201624 and hospital-acquired condition scores from the Center for Medicare and Medicaid Services (CMS).25
We first calculated a set of sample descriptive statistics, including the mean, minimum and maximum, and SD for hospital CPOE performance scores, number of years the hospital has taken the CPOE assessment test and our selected set of hospital demographics. Next, we calculated the mean CPOE assessment test scores over time from 2009 through 2016. Then, we calculated the mean CPOE performance scores by hospital test experience: whether the hospital was in the first, second, third, fourth, fifth, sixth, seventh or eighth year of taking the test. We then created a bivariate comparison of CPOE test scores by hospital demographics.
Finally, we created a multivariate ordinary least squares (OLS) regression model with hospital CPOE test scores as our dependent variable and our set of hospital demographics as independent variables, as well as number of years the hospital had participated in the CPOE simulation test. Our model also included hospital random effects and year fixed effects, as well as robust SEs clustered at the hospital level. We also ran several models as robustness checks: one using an OLS model without random effects, and two using cross-sectional OLS regression on only the 2016 results. Our first cross-sectional 2016 model included an independent variable for Medicare Hospital Compare ratings overall star ratings as our predictor of interest. The second included an independent variable for hospital-acquired condition score from CMS as our predictor of interest.
Across all 5107 CPOE performance test results in our sample, the mean score was 59.3% correct, with an SD of 16.9%. The mean number of years a hospital had participated in the simulation was 2.71 (table 1). Most (88.5%) hospital-year observations in our sample were accredited by the Joint Commission. Most (57.2%) hospital-year observations were from medium-sized hospitals, between 100 and 399 beds, followed by large hospitals over 400 beds (22.7%) and small hospitals with fewer than 100 beds (20.1%). A large proportion of hospital-years in our sample were members of a healthcare system (78.8%), and 47.1% were teaching hospitals. Most of our hospital-year observations were from private, non-profit hospitals (71.5%), followed by private, for-profit hospitals (18.4%) and public, non-federal hospitals (10.1%). Most were in an urban area (75.5%), and 20.8% were in the North-East, 23.6% in the West, 20.0% in the Midwest and 35.5% in the South.
CPOE safety performance over time
The mean CPOE performance test result increased over time, starting at 54.0% in 2009, and rising over time: 57.1% in 2010, 56.8% in 2011, 58.1% in 2012, 58.7% in 2013, 58.7% in 2014, 60.2% in 2015 and, finally, 61.6% in 2016 (figure 1). From 2009 to 2016, the mean score improved by 7.6 percentage points.
Performance on the CPOE simulation test also increased with hospital experience. Hospitals taking the test for the first time had a mean score of 55.2%, compared with 58.1% on second tests, 59.1% on third, 62.7% on fourth, 65.4% on fifth, 67.8% on sixth, 70.4% on seventh and 70.3% for hospitals taking the test an eighth time (figure 2).
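As a back-of-envelope check on the experience gradient above, a simple least-squares slope fitted to the eight group means comes to roughly 2.3 percentage points per additional year of test experience. This is a rough bivariate calculation over group means, not the adjusted random-effects model reported in the regression results:

```python
# Mean CPOE test scores by year of test experience (figure 2 values).
years  = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [55.2, 58.1, 59.1, 62.7, 65.4, 67.8, 70.4, 70.3]

# Ordinary least-squares slope: cov(x, y) / var(x), computed by hand.
n = len(years)
x_bar = sum(years) / n
y_bar = sum(scores) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(years, scores))
         / sum((x - x_bar) ** 2 for x in years))

print(f"{slope:.2f} percentage points per year of experience")  # 2.33
```

The unadjusted slope is close to the coefficients from the regression models, which is reassuring but expected given how steadily the group means rise.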
CPOE safety performance by hospital characteristics
Comparing hospitals’ CPOE safety performance score across hospital characteristics, we found that Joint Commission-accredited hospitals had slightly lower scores (59.2%, compared with 60.4% for non-Joint Commission-accredited hospitals). Small hospitals with fewer than 100 beds had a mean score of 60.5%, followed by medium-sized hospitals with 100–399 beds with a mean score of 59.3% and large hospitals with 400 or more beds at 58.5%. Hospitals that were members of a healthcare system had a mean score of 59.2% compared with 59.8% for non-system hospitals. Private, non-profit hospitals had the highest mean score of 59.7%, followed by public, non-federal hospitals at 59.3% and private, for-profit hospitals at 58.2%. Rural hospitals had a mean CPOE safety score of 60.2% compared with urban hospitals with a mean score of 59.1%. Finally, when comparing hospitals across geographic regions, those located in the North-eastern United States had the highest mean score (62.0%), followed by those in the West (61.4%), the Midwest (58.6%) and the South (56.8%) (table 2).
In the multivariate regression results, we found that each additional year of experience taking the test was significantly associated with a 1.9 percentage point increase in CPOE safety performance score (p<0.001). Several hospital characteristics were significantly associated with lower CPOE safety performance: Joint Commission accreditation compared with non-accredited hospitals (β=−2.50, p=0.01), and location in the Midwest (β=−3.57, p<0.001) or South (β=−4.93, p<0.001) compared with hospitals in the North-East (table 3).
We ran three other models as robustness checks. In the first, the same multivariate OLS regression but without the hospital-level random effects (only hospital-level clustered robust SEs), we found broadly similar results, with each year of test experience significantly associated with a 2.53 percentage point increase in CPOE safety performance score (p<0.001) (online supplementary appendix table 1). In the cross-sectional multivariate model using only 2016 data and Medicare Hospital Compare data, we found that each increase in overall star rating was associated with a small decrease in CPOE safety performance score (β=−1.31, p=0.03) (online supplementary appendix table 2). In our analysis of hospital-acquired conditions, we found no relationship between CPOE safety performance and hospital-acquired condition score (β=−0.17, p=0.53) (online supplementary appendix table 3).
There is widespread concern that EHRs have not delivered on their promised benefits to improve quality and safety. Given the high cost of adoption, it is important to evaluate whether EHRs are driving quality improvement. This is especially relevant in light of concerns over the negative impact of EHRs, such as physician dissatisfaction with the systems, issues with the transition to commercial EHR systems26 and EHR-imposed ‘alert fatigue’, which can also have negative consequences for patient outcomes.27 Ensuring that EHRs improve quality and safety to the best of their ability is critical given these very real costs.
Our results show that while EHR safety performance has improved from 2009 to 2016, there is still significant progress to be made. While the mean hospital score improved from 54.0% to 61.6% over that period, the average hospital clinical decision support-enabled CPOE system still properly prevents fewer than two-thirds of potentially dangerous adverse drug events. Moreover, even among hospitals that had taken the test eight times, systems alerted providers to, or otherwise prevented, only about 7 in 10 potential adverse drug events, showing additional room for improvement. The results of this national, longitudinal study of CPOE performance underscore the need for continuing efforts to improve EHR safety performance.
While hospitals that took the test in multiple years showed higher performance in subsequent years than those taking it for the first time, few other hospital characteristics were associated with higher performance. Performance on the CPOE safety assessment was not positively correlated with other common measures of hospital quality, including accreditation by the Joint Commission, Hospital Compare star ratings or hospital-acquired condition scores. Our study contributes to the emerging evidence suggesting that many quality measures do not capture a full picture of hospital performance. Hospital-acquired condition scores and Joint Commission accreditation have recently come under significant scrutiny for their usefulness in evaluating hospital quality.28 29 Hospital Compare stars may be too coarse a measure to correlate with our CPOE assessment results and have their own set of limitations.30 Given that prior studies have shown that Leapfrog CPOE assessment scores are correlated with observed rates of adverse drug events in hospitals,19 we believe the Leapfrog CPOE assessment captures an important dimension of process quality that is not observed by other common quality ratings focused on structural or outcome quality. Quality of care is complex and multidimensional; combined with other metrics, the Leapfrog CPOE assessment represents an important part of a holistic quality measurement programme.
Our finding that hospitals taking the test in subsequent years achieved higher scores is important, if expected, and suggests that a key enabler of better safety performance is an intentional, ongoing effort to improve quality. Combined with previous studies suggesting significant within-vendor quality variation across a range of outcomes,10 31 our results build on the theory that EHR implementation alone, especially if that implementation includes only limited clinical decision support to prevent medication errors, is not enough to achieve safety improvement. Improving patient safety likely requires intentional efforts to improve both technical systems and sociotechnical processes, which in turn require workflows and providers to acclimate to the implementation of new systems. Hospital decisions to take the test multiple times may also reflect a dedication to a culture of safety, as ongoing assessment may lead to organisational learning and quality improvement, and safety culture is an important enabler in reducing preventable medical errors.32 For a hospital looking to set the standard in patient safety, simply having an EHR is nowhere near as important as how it is used and, in this specific instance, how the organisation manages its medication-related decision support.
Our study has several limitations. First, while we have a large sample of hospitals around the nation, we are limited to those that completed the Leapfrog Annual Practices Survey. This may result in a sample that is not representative of all US hospitals. If this were the case, we would expect our results to be biased upwards, as hospitals interested in quality improvement are more likely to participate in a voluntary quality programme. Second, while efforts were made to keep the CPOE performance assessment tool consistent throughout the study period, several small changes were made to keep the tool up to date, though we did not see any significant changes in hospital scoring in years when the test changed. Third, our study measured process quality (whether the CPOE system performed correctly) rather than direct patient outcomes, though better performance on the test has been shown to correlate with lower rates of preventable adverse drug events. Similarly, not every potential adverse drug event results in patient harm, though adverse drug events have been shown to lead to patient harm and increased cost.33 Finally, the results of the study are descriptive and do not attempt to determine any hospital characteristics that are causally linked to differential levels of CPOE safety performance.
Ten years after the HITECH Act, EHRs still have significant room for improvement when it comes to patient safety. In a national study of 1527 hospitals taking a CPOE safety performance assessment that used simulated medication orders to evaluate how well their EHR prevented potential errors, we found that the average hospital EHR prevented 54.0% of potential errors in 2009, rising to 61.6% in 2016.
While hospital performance on the Leapfrog CPOE assessment test improved from 2009 to 2016, decision support systems still prevented fewer than two-thirds of potential adverse drug events in simulated patient orders. Hospitals that took the assessment multiple times performed better in subsequent years than hospitals taking the test for the first time, suggesting that intentional efforts towards quality improvement are an important enabler of higher safety performance. However, even hospitals that had taken the test eight times averaged only about 70%. While EHR performance at preventing potential medication errors improved over the course of our study, there remains significant room for additional progress, suggesting that tests of implementation such as this one are essential.
Contributors All authors contributed to the study through data collection, analysis, drafting or supervision, and met the criteria for authorship per ICMJE guidelines.
Funding This study was supported by Agency for Healthcare Research and Quality (R01HS023696).
Competing interests DB reported consulting for EarlySense, which makes patient safety monitoring systems; receiving cash compensation from CDI (Negev), which is a not-for-profit incubator for health internet technology start-ups; receiving equity from Valera Health, which makes software to help patients with chronic diseases; CLEW, which makes software to support clinical decision-making in intensive care; and MDClone, which takes clinical data and produces deidentified versions of it. DC is an employee of Pascal Metrics, a federally certified patient safety organisation.
Patient consent for publication Not required.
Ethics approval Brigham and Women's Hospital IRB approval for study 2014P001614.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data may be obtained from a third party and are not publicly available.