
Use of simulation to assess electronic health record safety in the intensive care unit: a pilot study
Christopher A March1, David Steiger1, Gretchen Scholl2, Vishnu Mohan3, William R Hersh3, Jeffrey A Gold2

  1. Department of Hospital Medicine, Oregon Health and Science University, Portland, Oregon, USA
  2. Department of Pulmonary and Critical Care Medicine, Oregon Health and Science University, Portland, Oregon, USA
  3. Department of Medical Informatics & Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon, USA

Correspondence to Dr Jeffrey A Gold; goldje{at}ohsu.edu

Abstract

Objective To establish the role of high-fidelity simulation in testing the efficacy and safety of the electronic health record (EHR)–user interface within the intensive care unit (ICU) environment.

Design Prospective pilot study.

Setting Medical ICU in an academic medical centre.

Participants Postgraduate medical trainees.

Interventions A simulated 5-day ICU patient course was developed in the EHR, including labs, hourly vital signs, medication administration records, ventilator settings, nursing documentation and notes. Fourteen medical issues requiring recognition and subsequent changes in management were included. Issues were chosen based on their frequency of occurrence within the ICU and their ability to test different aspects of the EHR–user interface. ICU residents, blinded to the presence of medical errors within the case, were provided a sign-out and given 10 min to review the case in the EHR. They then presented the case, with their management suggestions, to an attending physician. Participants were graded on the number of issues identified. All participants were provided with immediate feedback upon completion of the simulation.

Primary and secondary outcomes To determine the frequency of error recognition in an EHR simulation. To determine factors associated with improved performance in the simulation.

Results 38 participants, including 9 interns, 10 residents and 19 fellows, were tested. The average error recognition rate was 41% (range 6–73%), and recognition increased with the level of training (35%, 41% and 50% for interns, residents and fellows, respectively). Over-sedation was the least-recognised error (16%); poor glycemic control was the most often recognised (68%). Only 32% of the participants recognised inappropriate antibiotic dosing. Performance correlated with the total number of screens used (p=0.03).

Conclusions Despite the development of comprehensive EHRs, significant gaps remain in the identification of dangerous medical management issues. This gap persists despite high levels of medical training, suggesting that EHR-specific training may be beneficial. Simulation provides a novel tool both to identify these gaps and to foster EHR-specific training.

  • MEDICAL EDUCATION & TRAINING

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/3.0/ and http://creativecommons.org/licenses/by-nc/3.0/legalcode


Article summary

Article focus

  • To develop a simulation environment to test the ability of providers to recognise medical errors in the EHR.

  • To establish the reproducibility of EHR-based simulation testing.

  • To understand the types of medical errors/patient trends which are not recognised by the average user of the EHR.

Key messages

  • Average users of the EHR, irrespective of their level of training, recognise disturbing trends in patient condition and medical errors at a poor rate.

  • Simulation testing will allow for a structured approach to both EHR education and EHR redesign.

  • Issues related to the EHR–user interface are magnified by the data-rich intensive care unit environment.

Strengths and limitations of this study

  • The study demonstrates the feasibility of using EHR simulation to identify patient safety and quality issues related to the EHR/user interface.

  • The study provides a framework to test how new educational techniques or EHR interface design can improve patient safety and error recognition.

  • This pilot study does not address whether participation in the simulation itself improves providers' use of the EHR.

Introduction

Use of the electronic health record (EHR) is growing in the USA, spurred by financial incentives from the American Recovery and Reinvestment Act (ARRA).1,2 A growing body of research demonstrates that EHRs provide a myriad of benefits, including increased adherence to guideline-based care, decreased prescribing errors and improved disease monitoring.3–5 There has been a significant rise in EHR use across the country, with a near tripling in the number of hospitals using any form of EHR during the first decade of the 21st century.6,7 By the end of 2011, EHR adoption had increased to over 50% of all US physicians, stimulated by $2.5 billion in incentives paid out under the Health Information Technology for Economic and Clinical Health (HITECH) Act of ARRA.8 As even more healthcare systems transition to EHRs, there will be an increasing need to develop new methods to effectively train healthcare providers, particularly with respect to maximising the functionality of the EHR as a clinical tool.

While EHRs can offer significant benefits, they can also foster errors in ways that paper documentation did not, a phenomenon that has been termed ‘e-iatrogenesis’.9 At the most fundamental level, EHR software itself can be poorly designed and may promote errors such as radiation overdosing or miscalculation of patient medication doses.10 Medication ordering and monitoring appear to be particularly vulnerable to errors in the EHR: duplicate medication orders, as well as drug dosing and monitoring errors, have been shown to increase in the post-EHR era.11,12 More complex types of errors arise from the way clinicians interface with the EHR; many of these errors were unforeseen prior to the implementation of these systems.13 The complexity of EHR implementations has often led to unintended consequences and errors, and recent studies have examined how the vast amount of information displayed in a patient's electronic record can fragment the ‘big picture’ of the patient's trajectory and impose data overload on the clinician's cognitive processes.14,15

In November 2011, the Institute of Medicine (IOM) released a report on the safety of health information technology (HIT)16 that detailed challenges associated with the safe implementation of HIT. The report documented both the predictable and the unintended consequences of EHRs. It also developed a taxonomy for classifying errors, with categories that included data fragmentation, over-completeness (including excessive redundancy and copy-and-paste), errors in data recognition and, perhaps most importantly, cognitive errors.16,17 The latter arise when users are unable to effectively process data to make appropriate decisions because of the way data are presented within the EHR.18

These safety issues are perhaps most relevant in the intensive care unit (ICU), where a 24 h cycle typically generates over 1300 new data points in the health record for an average patient.19 Many of the reports of increased errors, patient morbidity and failed EHR implementations have come from the ICU environment.20,21 In an attempt to address this problem, Ahmed et al described a new EHR interface for their ICU designed to present data in a context-specific and streamlined manner; it successfully reduced both the total number of errors per provider and the ‘task-load’ index, an indirect measure of data overload.22 Unfortunately, most institutions do not have the expertise or resources to design their own EHR interface, instead relying on commercial systems.

Adequate provider training is a key component of improving EHR safety, and studies document that physician training in EHR use is currently suboptimal. Underwood et al demonstrated that while at least 3–5 days of training were required for physicians to report the highest levels of satisfaction, nearly half of the physicians studied (49.3%) reported that they had received 3 or fewer days of training. Interestingly, respondent ratings of the ease of use of meaningful use measures continued to improve with more than 2 weeks of training.23 The IOM and the American Medical Informatics Association (AMIA) have identified EHR development, implementation and training as key areas for new research to improve healthcare quality and safety.16,24

In spite of the growth of medical simulation and the increasing emphasis on high-fidelity simulation, little work has been done on EHR-specific simulation training. Simulation training is particularly attractive because it conveys no risk to patients, maintains patient privacy and allows a highly specific and reproducible training environment that can be tailored to the needs of learners and healthcare organisations.25 For full task training via simulation to be effective, however, specific attention must be given to creating psychological and functional fidelity, that is, recreating the true ‘feel’ of the goal environment.26 The few studies of EHR simulation have not been conducted in the ICU, nor have they truly tested physicians' ability to recognise and process information (as opposed to order entry).27,28 Barnato et al succeeded in creating a realistic simulated ICU environment to test decision-making variability in patient triage.29 However, in their study, the EHR was utilised as a tool within the simulation rather than being the focus of the simulation exercise itself.

The goal of our study was to create a highly realistic, complex simulated ICU patient encounter in the EHR. We developed this simulation as a pilot, part of a longer term goal of teaching effective use of the EHR in the ICU: to identify common EHR error types, such as medication monitoring errors and failure to identify concerning trends in laboratory or vital sign data, and to help physicians cope with data fragmentation and overload.

Methods

Ethics statement

The study was approved by the Oregon Health and Science University Institutional Review Board (IRB). The study was deemed minimal risk and informed consent was not required. All participants were provided an IRB-approved information sheet about the protocol. All data were de-identified and stored in a secure file. The authors are willing to share all data obtained from this research; data are available via email request to the corresponding author.

For the study, a new training environment was created within our enterprise-wide EHR (EPIC Care; Epic Systems, Madison, Wisconsin, USA) that allowed the generation of patient cases with multiple consecutive days of patient data. This contrasted with the previous training environment, which supported only single-day encounters because all data were deleted at the end of each day. The new training environment was an exact replica of the physician's current practice environment; any user-specific settings and customisations generated in actual patient care were retained in the simulation environment (eg, individual preference lists, screen view settings, etc).

Within this new environment, we created a multiday simulated Medical ICU (MICU) patient case, which detailed the clinical course of a 74-year-old patient with diabetes admitted in septic shock with resulting acute renal failure and acute respiratory distress syndrome (ARDS) requiring mechanical ventilation. The patient improved clinically over the initial 48 h, including resolution of renal failure, shock and fever. Recurrent sepsis developed on the fifth hospital day, presumably due to an inadequate antibiotic dose in the setting of normalisation of renal function. The case was made as robust as possible and included hourly vital signs, a full medication administration report (MAR) including as-needed (PRN) medications, a detailed hourly intake/output report, and nursing, resident, attending and respiratory therapy notes.

The case was designed with the central theme of determining whether a diagnosis of recurrent sepsis would be made. We chose sepsis as the focus because of its high prevalence (it is the leading cause of death in the ICU), because a significant percentage of physicians believe the diagnosis is missed in patients and because epidemiological studies suggest that many patients experience a delay in diagnosis, which is associated with worse outcomes.30,31 Aside from the physiological and laboratory data associated with the diagnosis, we built in additional errors that we had identified as occurring at high frequency after integrating discussion of EHR use into our weekly MICU Morbidity and Mortality conference. The total number of errors/patient trends within the case was typical for patients with significant missed clinical deterioration, particularly in cases where clinical decision-making did not meet best practices. In total, 14 individual action items were built into the case, which could be grouped into three categories: (1) dangerous trends in lab results or vital signs (eg, 25% reduction in blood pressure with tachycardia and leukocytosis), (2) clear medication errors (eg, incorrect antibiotic dose for renal function) and (3) failure to adhere to institutional or national best practices in critical care (eg, attention to the items covered by the ‘FAST HUG’: Feeding, Analgesia, Sedation, Thromboembolic prevention, Ulcer prophylaxis, Head of bed elevation and Glycemic control).32 Table 1 presents a complete list and definitions of the errors included in the case, as well as the type of EHR–user interface error each item represents within our institution's specific EHR.

Table 1

Fourteen errors developed throughout the 5-day ICU course
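As an illustration of how such a rubric might be encoded for scoring, the Python sketch below represents a subset of the 14 items from table 1, grouped into the three categories described above. The item names and the scoring function are hypothetical abbreviations for this sketch; the study itself used a standardised paper data collection sheet.

    # Illustrative encoding of the scoring rubric (a subset of the 14 items
    # from table 1; item names are abbreviations, not the study instrument).
    RUBRIC = {
        "dangerous_trends": [
            "bp_drop_with_tachycardia",        # impending recurrent sepsis
            "rising_leukocyte_count",
        ],
        "medication_errors": [
            "antibiotic_dose_not_readjusted_for_renal_recovery",
        ],
        "best_practice_lapses": [              # 'FAST HUG' items
            "poor_glycemic_control",
            "oversedation_without_daily_awakening",  # MAAS 0-1 throughout
            "excessive_tidal_volume_gt_6_cc_per_kg",
        ],
    }

    def recognition_rate(identified):
        """Return the fraction of rubric items a participant identified."""
        all_items = {item for items in RUBRIC.values() for item in items}
        return len(identified & all_items) / len(all_items)

    # Example: a participant who caught two of the six items shown here.
    print(recognition_rate({"poor_glycemic_control", "rising_leukocyte_count"}))  # 0.33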

The simulated case was then deployed on an EHR workstation in the MICU. Participants included interns and residents (predominantly internal medicine trainees), as well as pulmonary, medical and anesthesia critical care fellows (of all years of training). All participants had received institution-specific training with our EHR and were already users of the system prior to testing. This training, standard for all residents and fellows at the beginning of their training, comprised 1.5 days of small-group instruction with one of the institution's dedicated EHR trainers. Training involved hands-on use of the system and included tasks such as data retrieval, data entry and instruction in customisation. Users were expected to complete a set number of tasks in each of these areas prior to completion. Each participant was provided a one-page description of the patient, including a brief synopsis of the history and a current physical examination for context. Participants were told to analyse the patient data in order to prepare to ‘sign out’ the patient to a colleague, including any management changes they would recommend making to the patient's care. Participants were blinded to the presence of the known errors built into the case. Each participant used their own login credentials, which activated their own personal EHR customisations, and was allotted 10 min of chart review time, which represents the approximate amount of time the average resident spends reviewing the chart while prerounding on an individual patient at our institution. Of note, we initially tested the case with two senior critical care fellows to ensure both its realism (in terms of data presentation) and the feasibility of completion in the allotted time.

During the exercise, participants were directly observed by a member of the study team, and all data were recorded on a standardised data collection sheet. The observer noted both the absolute number of screens used in reviewing the patient record and the use of either of two ‘high-yield’ screens. One of these screens (the ‘MD Index’ screen) was a gateway to multiple different modes of data presentation, while the other (the ‘Synopsis’ screen) presented a graphical view of vital sign trends alongside timed MAR and lab data. Of note, while all of our primary data and portal screens were designed to be used within the ICU environment, none is specific to the ICU and all are utilised throughout the inpatient environment.

Each participant made a brief presentation to a member of the study team, with specific focus on the action items that should be addressed. The presentation was structured to mimic the workflow of daily rounds. Participants were scored on whether they identified the action items/clinical trends within the case. Upon conclusion of the encounter, all participants were given immediate feedback on which issues they had correctly identified, which they had missed and where to find the missing data in the EHR.

Differences between groups were analysed using a two-tailed Student's t test. Correlations were analysed via Spearman's rank correlation. (For both, a p value <0.05 was considered significant.) All data were analysed with GraphPad Prism (GraphPad Software, San Diego, California, USA).
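For readers who wish to reproduce this style of analysis, the comparisons map onto standard statistical library calls. The sketch below uses SciPy with invented scores purely to show the shape of the analysis; the numbers are not the study data.

    # Sketch of the analyses described above, on hypothetical data.
    from scipy import stats

    # Fraction of the 14 items recognised by each participant (invented values).
    fellow_scores   = [0.50, 0.43, 0.57, 0.64, 0.36]
    resident_scores = [0.36, 0.43, 0.29, 0.50, 0.43]

    # Two-tailed unpaired Student's t test between training levels.
    t_stat, p_t = stats.ttest_ind(fellow_scores, resident_scores)

    # Spearman rank correlation between screens visited and errors recognised.
    screens_visited   = [10, 14, 16, 20, 25]
    errors_recognised = [4, 5, 6, 8, 9]
    rho, p_rho = stats.spearmanr(screens_visited, errors_recognised)

    print(f"t test p={p_t:.3f}; Spearman rho={rho:.2f} (p={p_rho:.3f})")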

Results

A total of 38 participants were tested: 19 fellows, 10 residents and 9 interns. Of the 14 possible medical issues requiring recognition and alteration in management, an average of 41% (range 6–73%) were identified (figure 1). The recognition rate increased significantly with the level of clinical training: interns, residents and fellows recognised 35%, 41% and 50%, respectively (p=0.03; figure 1).

Figure 1

Simulation performance is loosely correlated with the level of training. Thirty-eight participants underwent EHR simulation and were graded according to the number of correctly identified errors. Data were analysed by analysis of variance.

Overall, there was little consistency in the type of errors missed across the cohort as a whole. The least recognised issues were over-sedation of the patient and the lack of daily awakenings (16%), the latter indicated by a Motor Activity Assessment Scale (MAAS) score varying between zero (unresponsive to noxious stimuli) and one (responsive only to noxious stimuli).33 Poor glycemic control was the most frequently identified issue, although still at a relatively low rate (68%; figure 2). Of greater concern, only 29% correctly recognised the change in vital signs consistent with recurrent sepsis.

Figure 2

Frequency of error recognition. The number of participants correctly identifying each of the 14 main errors built into the simulation.

Of note, during the first round of testing, we inadvertently introduced an additional error into the laboratory screen when we built the simulated case: the patient, instead of having 20% band forms in their manual differential, had 20% basophils. Only 1 of the 14 participants in that round noted this abnormality, providing additional evidence of the simulation's potential to assess juxtaposition errors, as well as the extent to which they exist. Finally, except for recognition of an excessive tidal volume (>6 cc/kg) (58% vs 21%; p=0.045) and the lack of daily awakenings (53% vs 16%; p=0.038), two best practices for intubated patients with ARDS,34,35 there were no statistical differences between fellows and residents in the recognition of other errors or safety issues (figure 3). Overall, the average participant visited 16.4 different screens (an average of 35.6 s per screen). The number of individual screens visited correlated with the number of errors recognised (figure 4).

Figure 3

Successful error recognition is mostly independent of the training level. Overall recognition rate by fellows (blue) and residents (red) for each of the 14 major errors. Data analysed by t test.

Figure 4

Increased screen utilisation is associated with improved performance. The number of independent screens visited was correlated with the overall performance on simulation.

We also looked at whether viewing ‘high-impact’ data screens affected the ability of participants to find errors, focusing on the two main portal pages within our EHR. One was the ‘Synopsis’ page, which presents haemodynamics in a graphical format as well as all medications and lab values. The other was the ‘MD Index’, a portal created by our institution as part of its customisation of the EHR, which allows easy access to a number of different data screens, including vitals, the MAR and haemodynamics. We found that use of the Synopsis screen was associated with lower performance on the simulation. Conversely, use of the MD Index was associated with significantly better performance (figure 5).

Figure 5

Individual screen use correlates with performance. The overall success rate was tabulated for users of the two major portal screens, A and B. Overall, use of screen A was associated with increased error recognition, while use of screen B was associated with poor performance. Data were analysed via t test.

Discussion

In this pilot study, we developed and used a novel ICU-specific EHR simulation based on a commonly used commercial system. There is an increasing trend towards using simulation as a tool for assessing end-user competency and improving patient safety. High-fidelity simulation allowed our study to be conducted in an authentic and realistic clinical environment, with the opportunity to provide the participant with immediate feedback at the conclusion of the simulation. First, since end-users often customise their user interface quite significantly, we felt it was important to create a simulation environment identical to the actual production EHR environment, including the log-in process and key clinical screens, and to maintain any customisation that end-users had already developed. Second, the simulation was performed in the ICU on existing clinical workstations, further enhancing environmental fidelity. Third, the case was based on an actual ICU patient, and the data were representative of a typical high-complexity ICU patient in terms of quality, the amount of data within the patient chart (including the fact that this was a 5-day ICU stay) and the types of errors and safety issues typically encountered in our ICU. Fourth was our method of assessment: by having participants present the patient to an ICU physician (either attending or senior fellow), we created an environment consistent with our existing workflow (as opposed to answering specific questions on a written examination, using surveys to elicit information or recounting the simulation after the passage of much time). Finally, the timed nature of the exercise was consistent with the real workflow in an ICU, where physicians have only a limited time to search for data.

Our findings were both surprising and concerning. First, only 41.5% of errors were recognised, and while fellows performed statistically significantly better than interns or residents, their overall performance was still below what most would consider acceptable (47%). Furthermore, the most severe errors, such as the development of impending shock, were recognised at an even lower frequency (40%). We observed poor overall performance at all levels of training, despite all participants having received general training with our EHR and having over a year's experience with the system. Given this finding, it appears that a major stumbling block is the physician interface with the EHR, as opposed to a pure knowledge deficit. These observations are in line with the reported literature. Nearly 89% of physicians believe the diagnosis of sepsis is missed in the inpatient setting.31 In patients with ARDS, as in this case, nearly 70% of patients are still not managed with appropriate ventilator strategies.34 Medication errors, including inappropriate dosing due to changing renal function, account for nearly 78% of the total reported errors in the ICU, and nearly 40% of ICU patients are oversedated without acknowledgement of their sedation score.35,36 Finally, among patients who suffer in-hospital cardiac arrest or require ICU admission, nearly 60% show evidence of clinical decompensation prior to transfer, and in one study medical staff were aware of all of the physiological abnormalities in only 34% of patients.37,38

Our findings are consistent with others' descriptions of the ICU as a vulnerable environment for the EHR. For example, Han et al documented increased mortality with the introduction of computerised provider order entry (CPOE) into their neonatal ICU.21 This was believed to be due not to the system itself, but rather to poor implementation of the system, lack of customisation, poor workflow and overall poor education and training on how to manage the physician–EHR interface. This assessment was supported by a subsequent study documenting improved outcomes with the implementation of an identical system in a similar ICU.39,40 A similar experience was observed at another institution, where an enterprise-wide EHR implementation proved successful with the exception of the MICU.20 The MICU-specific problems were attributed to poor training, inadequacies in the EHR–physician interface and a lack of customisation creating unmanageable workflow issues, and the system was taken offline within 6 months. Only after improved customisation, an increase in the number of available computers, and improved training and education were they able to safely reintroduce the system into their ICU.20

While the concept of patient-based simulation in general is not new, our study is one of the first to use robust, high-fidelity simulation to objectively assess successful use of the EHR and to specifically target identification of changes in clinical status as the primary endpoint. When EHRs have been utilised in simulation training, they have often been used with non-physicians, such as pharmacy or physician assistant students, or included in a broader simulation exercise in which little emphasis was placed on the interface with the EHR itself.27,41 Interestingly, a recent set of studies from one group has used a combination of simulated cases and video analysis to assist in EHR design.42 However, those studies focused on CPOE (as opposed to the other functions of EHRs, including data retrieval), and no data were provided on the fidelity of the simulation or the clinical context of the actual cases.

Within the ICU, two studies have specifically addressed the use of EHR simulation. In the first, physicians were tested on their decision-making with regard to end-of-life care in a virtual patient admitted to the ICU with metastatic cancer and septic shock. In this scenario, the EHR was utilised as a tool for disseminating the case-based information; efficient and appropriate use of the EHR was not assessed.29 In the second, researchers hypothesised that the user interface of their existing EHR decreased efficiency, impaired data finding and increased cognitive errors. They had 20 providers review a case in both their original EHR and one with a new front-end designed to improve data finding, with participants answering eight questions specifically related to the management of a bleeding patient.22 The new EHR significantly reduced the number of incorrect answers overall, although errors increased for one question focusing on medications. This study had several limitations, including the failure to use a high-fidelity environment (a testing room was used), the failure to test efficiency with the system (no apparent time limit), a very directed set of questions for assessing data finding (as opposed to the more fluid, unknown situation of the average ICU patient) and the failure to test longitudinal evaluation of data beyond 24 h.

The results of our pilot study significantly expand upon these prior studies and will allow us to design a more robust educational and quality improvement initiative around EHR simulation. First, we now have a blueprint for the creation of additional cases, a prerequisite for determining the impact of participation in the simulation. Second, we have established baseline error recognition rates for users at all levels of training and experience, allowing us to adequately determine the sample size required for additional studies. For example, based on data from cardiac arrest simulation, we can expect participation in this exercise to yield a nearly 20% improvement in error recognition on repeat testing,43 thus requiring at least 10 participants at each level of training to undergo repeat testing with additional cases to test this hypothesis. Finally, by establishing baseline usability data and simulation infrastructure, we now have the ability to test the effect of alterations in the EHR user interface on error recognition and overall performance.
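The sample size reasoning above can be checked with a standard power calculation. The sketch below assumes a paired (pre/post) design, the expected 20-percentage-point improvement and a within-participant SD of change of roughly 20 percentage points; the SD is our assumption for illustration, not a figure reported in the study.

    # Back-of-the-envelope power calculation for repeat testing.
    # ASSUMPTION (not study data): mean improvement ~20 points and SD of
    # change ~20 points, i.e. Cohen's d of about 1.0 for a paired comparison.
    from statsmodels.stats.power import TTestPower

    d = 0.20 / 0.20  # hypothesised mean change / assumed SD of change
    n = TTestPower().solve_power(effect_size=d, alpha=0.05, power=0.80,
                                 alternative="two-sided")
    print(f"participants needed per training level: {n:.1f}")  # about 10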

It is important to acknowledge several limitations of our study. First, we only tested data retrieval in this part of the simulation. We recognise that the EHR affects multiple aspects of the delivery of care, including communication and order entry. However, the process of data retrieval, processing and recognition is the foundation for effective communication and order entry, and thus we felt it was a logical place to begin. We plan to expand this simulation to address these other aspects of the EHR in the future. The second limitation concerns the nature and number of errors built into the case. We have discovered, through the incorporation of the EHR into our weekly Morbidity and Mortality conference, that clinical deterioration in patients is often heralded by numerous clinical clues and is often caused by a number of small errors within an individual case, both cognitive and system related.44 It is not uncommon for a patient with nosocomial clinical deterioration, as in this case, to have this number of issues that need to be identified. However, we also acknowledge that the care of the average ICU patient involves an interprofessional team of pharmacists, nurses and respiratory therapists. As a result, until our simulation is disseminated to all members of the team simultaneously, we cannot be certain that an issue missed by the physician would not be caught by another member of the team before resulting in direct patient harm. Further, it should be stressed that the goal of the simulation is to test the system under high-stress/dangerous situations. We believe this is not only a unique aspect of our study, but is essential to ensure that the system works optimally under all clinical situations. Third, we acknowledge that the case created is unique to the ICU environment. However, we believe that, with appropriate case creation, the same type of simulation can be used successfully in any clinical care environment. Fourth, while the case itself was realistic in terms of data presentation and the testing was performed in situ, participants were still aware that this was a simulated case. As a result, a significant Hawthorne effect could still exist, resulting in an overestimation of the error recognition rate. Finally, the study was performed using one specific EHR (EPIC Care). While it is the EHR most commonly used by US physicians, we acknowledge that each EHR and user interface will have its own strengths and weaknesses in terms of data recognition and processing.45 However, our method of using robust and realistic cases will allow other researchers to test the functionality of any other EHR.

In conclusion, the implementation of EHRs has brought a massive amount of information to the fingertips of ICU practitioners across the country. This study demonstrates that the combination of sheer data availability and provider knowledge is not sufficient for quality patient care: utilisation of the EHR is a skill that must be learned. There is much room for improvement, both in the interface itself and in how we teach its use. Through the creation of standardised cases for EHR simulation, we now have the infrastructure to improve user education as well as to objectively test the efficacy of both new educational techniques and EHR redesign.

References

Footnotes

  • Contributors CAM helped design the protocol and conducted the simulation experiments. DS conducted the simulation experiments and helped with data analysis. JAG designed the study, performed the simulations, is primarily responsible for data analysis and is the guarantor. GS was responsible for technical aspects of the design of the simulation environment. VM and WRH were responsible for both study design and data analysis. All authors have read and approved the final manuscript.

  • Funding NIH (grant number 1U24OC000015) and AHRQ (grant number R18 HS 021637-02).

  • Competing interests None.

  • Ethics approval OHSU Institutional Review Board.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement No additional data are available.