
An electronic trigger based on care escalation to identify preventable adverse events in hospitalised patients
  1. Viraj Bhise1,2,
  2. Dean F Sittig3,
  3. Viralkumar Vaghani1,2,
  4. Li Wei1,2,
  5. Jessica Baldwin1,2,
  6. Hardeep Singh1,2
  1. Center for Innovations in Quality, Effectiveness, and Safety (IQuESt), Michael E DeBakey Veterans Affairs Medical Center, Houston, Texas, USA
  2. Department of Medicine, Baylor College of Medicine, Houston, Texas, USA
  3. School of Biomedical Informatics, University of Texas Health Science Center, Houston, Texas, USA

  Correspondence to Dr Hardeep Singh, Department of Medicine, Baylor College of Medicine, 2002 Holcombe Blvd, 152, Houston, Texas 77030, USA; hardeeps@bcm.edu

Abstract

Background Methods to identify preventable adverse events typically have low yield and efficiency. We refined the methods of the Institute for Healthcare Improvement’s Global Trigger Tool (GTT) application and leveraged electronic health record (EHR) data to improve detection of preventable adverse events, including diagnostic errors.

Methods We queried the EHR data repository of a large health system to identify an ‘index hospitalization’ associated with care escalation (defined as transfer to the intensive care unit (ICU) or initiation of rapid response team (RRT) within 15 days of admission) between March 2010 and August 2015. To enrich the record review sample with unexpected events, we used EHR clinical data to modify the GTT algorithm and limited eligible patients to those at lower risk for care escalation based on younger age and presence of minimal comorbid conditions. We modified the GTT review methodology; two physicians independently reviewed eligible ‘e-trigger’ positive records to identify preventable diagnostic and care management events.

Results Of 88 428 hospitalisations, 887 were associated with care escalation (712 ICU transfers and 175 RRTs), of which 92 were flagged as trigger-positive and reviewed. Preventable adverse events were detected in 41 cases, yielding a trigger positive predictive value of 44.6% (reviewer agreement 79.35%; Cohen’s kappa 0.573). We identified 7 (7.6%) diagnostic errors and 34 (37.0%) care management-related events: 24 (26.1%) adverse drug events, 4 (4.3%) patient falls, 4 (4.3%) procedure-related complications and 2 (2.2%) hospital-associated infections. In most events (73.1%), there was potential for temporary harm.

Conclusion We developed an approach using an EHR data-based trigger and modified review process to efficiently identify hospitalised patients with preventable adverse events, including diagnostic errors. Such e-triggers can help overcome limitations of currently available methods to detect preventable harm in hospitalised patients.

  • diagnostic errors
  • triggers
  • adverse events
  • escalation of care
  • ICU
  • rapid response
  • patient safety

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Background

Measuring adverse events accurately is foundational for patient safety improvement efforts, but all existing measurement tools have limitations.1 Many hospitals use trigger tools such as the Institute for Healthcare Improvement’s (IHI) Global Trigger Tool (GTT) to monitor adverse events and patient harm in inpatient settings.2–4 Reviewers using the GTT are explicitly instructed not to make judgements about preventability during the review process.2 5 A recent systematic review by Hibbert et al 5 suggests that rather than being used primarily for counting adverse events, application of the GTT should be reframed as an opportunity to understand events and to determine the most frequent event types for quality improvement purposes. The review also recommends using preventability scores to set local priorities and including ‘omission’ adverse events.5 For example, like many other measurement methods, current applications of the GTT are usually unable to detect ‘omission’ events related to diagnostic errors.6–8

Because trigger tools identify an at-risk patient cohort that needs confirmatory reviews to determine adverse events, the yield of the trigger and the efficiency of the application process are important considerations for anyone using them. In the recent Hibbert et al review, the yield of the GTT varied between 7% and 40%.5 Previous applications have involved manually reviewing a large number of patient charts for the presence of triggers, followed by a detailed review of triggered records to identify adverse events.2–4 9–11 In contrast, newly available clinical data from electronic health records (EHRs) provide a unique opportunity to select which records to review.12 13 Methods that focus and optimise current trigger tools would increase the percentage of reviewed medical records in which an adverse event is identified while lowering the burden of record review.

Our study objective was to refine the methods of GTT application and leverage EHR data to improve detection of preventable adverse events, including diagnostic errors. More efficient methods to measure preventable events could lead to focused learning and quality improvement efforts, help facilitate analysis to understand contributory factors for these events, and help inform interventions for improvement.14

Methods

We queried the EHR data repository of a large health system to identify an ‘index hospitalization’ associated with escalation of care (defined as transfer to the medical intensive care unit (ICU) or initiation of a rapid response team (RRT) within 15 days of admission) between March 2010 and August 2015. We used expert input to identify automated inclusion and exclusion criteria that could enrich the triggered cohort so that, on review, we would be more likely to find a preventable adverse event. We then finalised the trigger through an iterative process of pilot chart reviews of triggered records, guided by expert input. We focused on patients at lower risk for escalation of care during hospitalisation based on two criteria: (1) age 65 years or younger when admitted to an adult inpatient service, and (2) presence of minimal comorbid conditions (Charlson Comorbidity Index15 <2). For such patients, escalation of care, if it occurred, would be more likely to be unexpected and preventable. To further increase the yield of preventable events, we electronically excluded patients transferred for postprocedure care (eg, after surgery or procedures such as percutaneous coronary intervention), patients who were frequently admitted (three or more prior hospitalisations in the past year), and patients transferred to hospice or palliative care within the 6 months prior to the index hospitalisation. Applying these automated inclusion and exclusion criteria allowed us to use electronic data to refine the GTT algorithm, resulting in an ‘e-trigger’.
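
The logic below is a minimal, hypothetical Python sketch of how these inclusion and exclusion criteria might be applied to hospitalisation records extracted from an EHR data repository; the field names and record structure are illustrative assumptions, not the actual schema or query used in the study.

```python
# Hypothetical sketch of the e-trigger's inclusion/exclusion logic applied to
# extracted hospitalisation records. Field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hospitalisation:
    age_at_admission: int              # years
    charlson_index: int                # Charlson Comorbidity Index
    icu_transfer_day: Optional[int]    # hospital day of ICU transfer, if any
    rrt_day: Optional[int]             # hospital day of first RRT activation, if any
    postprocedure_transfer: bool       # transfer for postprocedure care (eg, post-surgery, post-PCI)
    admissions_past_year: int          # prior hospitalisations in the preceding 12 months
    hospice_past_6_months: bool        # hospice/palliative care in the 6 months before admission

def escalation_within_15_days(h: Hospitalisation) -> bool:
    """Care escalation = ICU transfer or RRT initiation within 15 days of admission."""
    days = [d for d in (h.icu_transfer_day, h.rrt_day) if d is not None]
    return any(d <= 15 for d in days)

def e_trigger_positive(h: Hospitalisation) -> bool:
    """Flag low-risk patients with an unexpected care escalation for record review."""
    return (
        escalation_within_15_days(h)
        and h.age_at_admission <= 65      # inclusion: age 65 years or younger
        and h.charlson_index < 2          # inclusion: minimal comorbidity
        and not h.postprocedure_transfer  # exclusion: expected postprocedure transfers
        and h.admissions_past_year < 3    # exclusion: three or more prior admissions
        and not h.hospice_past_6_months   # exclusion: recent hospice/palliative care
    )
```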

We also refined the GTT review methodology and reviewed only charts identified by the e-trigger. Two physicians, both experienced in patient safety-related electronic medical record reviews, received additional training for this study and then independently reviewed all eligible records to identify events related to errors in diagnostic assessment and care management. They were asked to spend no more than 20 min per chart so that the review technique would remain practical for broader future application. Reviewers used the Safer Dx instrument to assess diagnostic errors and to collect information about process breakdowns across the five diagnostic process dimensions (patient factors; patient–provider encounter; test performance and interpretation; test follow-up and tracking; and consultations).16 17 To capture care management events, we identified adverse drug events, healthcare-acquired infections, postoperative complications, fall-related injuries and other adverse events. Potential harm was captured using the AHRQ Common Format Harm Scale V.1.2.18 Disagreements between reviewers were discussed and resolved by consensus prior to analysis.
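
As a rough illustration, a structured abstraction record along the following lines could capture each reviewer's findings; the field names and value lists below are hypothetical and simplified, not the study's actual review form.

```python
# Hypothetical, simplified structure for recording one reviewer's findings per chart.
# Event types and process dimensions mirror the categories described above; the
# exact fields and value sets used in the study are assumptions.
from dataclasses import dataclass, field
from typing import List

EVENT_TYPES = [
    "diagnostic error",
    "adverse drug event",
    "healthcare-acquired infection",
    "postoperative complication",
    "fall-related injury",
    "other adverse event",
]

@dataclass
class ChartReview:
    record_id: str
    reviewer_id: str
    preventable_event_found: bool
    event_types: List[str] = field(default_factory=list)         # drawn from EVENT_TYPES
    process_breakdowns: List[str] = field(default_factory=list)  # eg, "patient-provider encounter"
    harm_category: str = "no harm"     # AHRQ Common Format Harm Scale V1.2 category
    review_minutes: float = 0.0        # target: 20 min or less per chart
```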

Results

Of 88 428 hospitalisations during the study period, 887 were associated with escalation of care (712 ICU transfers and 175 RRTs). Of these 887 index hospitalisations, 92 (10.4%) involved unique patients in a low-risk cohort who encountered an unexpected escalation of care. The positive predictive value (PPV) for detecting any preventable adverse event in this cohort was 44.6% (41 of 92), with reviewer agreement of 79.35% (Cohen’s kappa 0.573, CI 0.409 to 0.747).
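
For reference, the sketch below shows how the reported yield statistics are computed; the PPV uses the counts reported above (41 preventable events among 92 trigger-positive records), while the kappa function is a generic two-rater calculation because the underlying per-record reviewer labels are not reported here.

```python
# Worked check of the reported yield statistics. The 41/92 counts come from the
# text; per-record reviewer labels are not reported, so cohens_kappa is shown
# only as a generic two-rater calculation over hypothetical 0/1 judgements.
from typing import Sequence

def positive_predictive_value(true_positives: int, flagged: int) -> float:
    """Proportion of trigger-positive records confirmed to contain a preventable event."""
    return true_positives / flagged

def cohens_kappa(rater_a: Sequence[int], rater_b: Sequence[int]) -> float:
    """Cohen's kappa for two raters assigning binary (0/1) labels to the same records."""
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

print(f"PPV = {positive_predictive_value(41, 92):.1%}")  # 44.6%, as reported
```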

We detected 7 (7.6%) diagnostic errors and 34 (37.0%) care management-related preventable adverse events: 24 (26.1%) adverse drug events, 4 (4.3%) patient falls, 4 (4.3%) procedure-related complications and 2 (2.2%) hospital-associated infections. Diagnostic errors included missed diagnoses of deep vein thrombosis, haemothorax, sepsis and alcohol withdrawal (examples in table 1). Errors resulted from breakdowns in the patient–provider encounter (ie, history, examination, test ordering; n=6, 85.7%), including failures in information gathering and interpretation (eg, a history of alcohol use was missed; leg pain in an immobilised patient was not evaluated during patient assessment) and delays in test follow-up and tracking (eg, a chest X-ray was ordered but the abnormal finding was missed). In most events (73.1%; 30 of 41), there was potential for temporary harm. Additionally, in all seven cases of diagnostic error, there was potential for serious harm.

Table 1

Examples of diagnostic errors and other adverse events identified in the study

Discussion

We developed a new approach, based on an e-trigger and modified review methods, to identify patients with preventable adverse events in inpatient settings. The approach leveraged EHR data and used a modified GTT algorithm and chart review methodology to increase the yield for preventable events. We were also able to identify inpatient diagnostic errors, which other currently available tools generally cannot detect. Modified e-triggers that use increasingly available clinical data from EHRs could improve identification of preventable adverse events in hospitals and set a stronger foundation for quality improvement and learning efforts.14 More efficient measurement methods could lead to a better understanding of contributory factors for these events and help inform interventions for improvement.

The 44.6% PPV of the escalation e-trigger was achieved more efficiently than in prior comparable efforts. Of the 88 428 hospitalisations in our study, EHR data helped us identify the 887 associated with escalation of care and then select just the 92 care escalations (ie, 0.1% of all hospitalisations) that were unexpected because of their low a priori risk. EHR data thus greatly increased yield and efficiency: we needed to review only the 10% of ‘enriched’ care escalations (92 of 887), of which more than two-fifths were found to contain a preventable event. The e-trigger therefore compares favourably with ‘unenriched’ random review methodologies. This refinement illustrates how organisations can leverage their EHR data to detect and focus on preventable adverse events, including diagnostic errors, using a lens of learning and quality improvement. Because manual record reviews are resource-intensive, they should be reserved for records that are highly likely to reveal learning opportunities. Future use of ‘free-text’ data through natural language processing could improve the yield and efficiency further by clarifying the reasoning behind specific patient transfers. With additional development and evaluation, a portfolio of EHR-enhanced ‘smart’ e-triggers could help hospitals improve the efficiency of their current patient safety monitoring activities.

The methods proposed herein advance prior scientific knowledge on the application and use of GTTs. Table 2 compares our findings with those from previous studies using both GTT and escalation of care triggers.9–11 19–22 Only a few of these studies focused on preventable adverse events and errors as learning opportunities.5 9 11 19 21 Overall, the PPV in this study compares well with prior work and is superior to that of two other large non-surgical studies.9 11 Our PPV for preventable events associated with care escalation is slightly lower than in Naessens et al’s study,20 in which the investigators used a substantially different manual review methodology involving random reviews of completed charts to identify any of the 55 IHI triggers. Rather than random reviews, those investigators themselves recommend a more focused review of records known to contain higher-yield triggers to gain better insight into problems with care delivery. They also recommend developing automated techniques to identify triggers, followed by record review, to allow a focus on contributory causes of events rather than just identifying events. These recommendations are consistent with our enhancements. We were also able to improve on previous studies that used initiation of an RRT as a trigger to identify preventable adverse events.19 21 Thus, our study methods and focus on preventable events and diagnostic errors advance the body of knowledge on the use of trigger methods for hospitalised patients.

Table 2

Comparison of findings from a sample of prior GTT studies with escalation e-trigger

The escalation e-trigger selected events that were more likely to be unexpected than those selected by the GTT as originally proposed, and potentially more likely to be associated with error. To focus on preventable adverse events, healthcare institutions could use similar strategies to refine and improve the efficiency of trigger tools. An iterative chart review process under expert guidance could help with further refinement and customisation. Nevertheless, we note that selective review and sample enrichment are most useful for quality improvement, learning and research purposes, and the resulting samples should not be used to calculate event rates.

We see several advantages of using an enriched patient sample. For example, a health system using this trigger (and similar future triggers) will find that reviewers need to examine a much smaller number of records to find the few that warrant detailed analysis for learning and improvement. This could bolster patient safety improvement efforts in health systems with constrained resources and competing demands on quality measurement. Contributory factors uncovered through a more detailed postreview safety analysis could provide the impetus for solutions, including non-punitive feedback to the front-line care team. Because very few methods focus on inpatient diagnostic errors, future efforts using similar triggers could help identify and understand contributory factors associated with diagnostic adverse events in inpatient settings.6–8 While this trigger cannot be used to estimate frequency, a combination of various types of electronic triggers could be refined and tested and, if found useful, used to calculate the frequency of inpatient diagnostic errors, a number that remains elusive and yet to be defined in US hospitals.23

Several limitations merit discussion. Our study was performed at one site, and our findings might not be generalisable to other settings. However, the trigger uses a common query language and relatively standard criteria (ICD-9 codes and event-specific codes for ICU transfer, RRT and hospice) that could be replicated easily. We were unable to report the sensitivity and specificity of the trigger or to calculate the prevalence of preventable adverse events, because doing so would have required a much larger number of additional record reviews to find false-negative cases and estimate prevalence. However, this refinement is a first step towards additional development and application. Determination of preventability is subject to reviewer judgement,24 but we took measures to make record reviews more objective. Also, as in most other retrospective evaluations of adverse events, we cannot rule out hindsight bias.

In conclusion, we developed an EHR data-based trigger and modified review processes to efficiently identify hospitalised patients with preventable adverse events, including diagnostic errors. Such e-triggers can help overcome limitations of currently available methods and inform the future development of robust measurement systems to detect and prevent harm from diagnostic errors and adverse events in hospitalised settings.

References

Footnotes

  • Twitter @HardeepSinghMD

  • Contributors Study concept and design: VB, HS, DFS. Acquisition of data: VB, LW. Statistical analysis: VB. Analysis and interpretation of data: VB, HS. Drafting of the manuscript: VB. Critical revision of the manuscript for important intellectual content: VB, DFS, VV, LW, JLB, HS. Administrative, technical or material support: VB, DFS, VV, LW, JLB, HS. Study supervision: VB, DFS, HS.

  • Funding Dr Singh is supported by the VA Health Services Research and Development Service (CRE 12-033; Presidential Early Career Award for Scientists and Engineers USA 14-274), the VA National Center for Patient Safety, the Agency for Healthcare Research and Quality (R01HS022087 and R21HS023602), and the Houston VA HSR&D Center for Innovations in Quality, Effectiveness and Safety (CIN 13-413).

  • Competing interests None declared.

  • Ethics approval Baylor College of Medicine IRB.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement Sensitive data not available to be shared.