Background: Most studies of healthcare complications identify surgery as a major contributor to the overall burden of complicated care that leads to injury or death. Indeed, surgical adverse events account for one-half to three-quarters of all adverse events in these studies. Despite the intensive current focus on improving medical quality and safety, only a minority of quality improvement efforts are focused on surgery. This study reports on the development and testing of a Trigger Tool to detect adverse events among patients undergoing inpatient surgery.
Methods: Rather than relying on traditional voluntary reporting for safety outcome measures such as incident reports, surgical peer review, or morbidity and mortality conferences, the Institute for Healthcare Improvement (IHI) has employed a new method for the detection of surgical adverse events (SAEs). This approach, commonly referred to as the “Trigger Tool”, identifies adverse events using a form of retrospective record review that has been developed and implemented in many areas of care.
Results: During a 12-month IHI Perioperative Safety Collaborative, 11 hospitals voluntarily submitted data from surgical inpatient record reviews. In 854 patients, 138 SAEs were detected in 125 records, a rate of 16 SAEs per 100 patients; 14.6% of patients experienced at least one SAE. Of these events, 61 (44%) contributed to increased length of stay or readmission, and 12 (8.7%) required life-saving intervention or resulted in permanent harm or death. Hospital review teams reported verbally that most of the events identified during the Trigger Tool review process had not been detected or reported via any other existing mechanism.
Conclusions: The IHI Surgical Trigger Tool may offer a practical, easy-to-use approach to detecting safety problems in patients undergoing surgery; it can be the basis not only for estimating the frequency of adverse events in an organisation, but also determining the impact of interventions that focus on reducing adverse events in surgical patients.
The Institute of Medicine’s landmark report on safety, To err is human,1 went beyond many earlier studies of medical care quality2–8 by focusing on the inherent and considerable risks of being cared for in a complex and technically sophisticated, yet highly fragmented, healthcare system. The report identified surgery as a major contributor to the overall burden of complicated care that leads to injury or death. Indeed, surgical adverse events account for one-half to three-quarters of all adverse events in this and other studies.9 10 Despite these concerns, much less is known about the epidemiology, basic science and systems-related issues of surgical safety than, for example, medication safety.11 Only a minority of the active clinical quality improvement initiatives have focused on surgery. Among the few programmes that do address concerns about quality in relation to surgery is the Surgical Care Improvement Project (SCIP), which was launched 3 years ago by the Centers for Medicare and Medicaid Services (CMS) in conjunction with other national partners.12 A major focus of SCIP has been to enhance the effective adoption of commonly accepted best practices such as antibiotic prophylaxis before surgery.
Although SCIP is firmly based in quality improvement theory and uses widely recognised process measures, only a few surgical quality improvement initiatives, such as the National Surgical Quality Improvement Project (NSQIP), have emphasised safety outcome measures. Since 2002, the Institute for Healthcare Improvement (IHI) has organised several surgical improvement collaboratives, which focus not only on traditional process measures of quality but also on safety outcomes.13 Rather than relying on traditional voluntary reporting for safety outcome measures, such as incident reports, surgical peer review, or morbidity and mortality conferences, the IHI has, within a Perioperative Safety Collaborative, employed a new method for the detection of surgical adverse events (SAEs). This approach, commonly referred to as the “Trigger Tool”, identifies adverse events using a technique that has been successfully implemented in many areas of care, including medication safety, intensive care unit (ICU) safety, perinatal safety and ambulatory safety.14–17
This study reports on the development and testing of a Trigger Tool to detect adverse events among patients undergoing inpatient surgery.
The Trigger Tool methodology is designed for retrospective review of a random sample of closed (abstraction completed) patient records using a list of “triggers”—items in the record that serve as clues to a possible adverse event, defined as unintended physical injury from medical care. Reviewers are not expected to read the record from front to back; they are instructed to look solely for triggers, and to spend no more than 20 min per record. The presence of a trigger (“positive trigger”) does not necessarily indicate that an SAE has occurred; rather, a reviewer’s discovery of a positive trigger prompts a check of other portions of the record to determine whether an SAE has occurred. For example, transfusion of blood products is a positive trigger. If blood loss during or following an operative procedure was within expected limits, then this trigger has not resulted in identification of an adverse event; in contrast, documentation of extensive intraoperative or postoperative bleeding or an unexpected number of transfusions means that an adverse event has occurred. Some triggers are themselves also adverse events—for example, postoperative pulmonary embolism. The distinguishing characteristic of an adverse event, according to this methodology, is that it is an unintended consequence of the medical care the patient received, not part of the natural progression of disease. All adverse events meeting this description that are discovered during review are counted, regardless of whether specific triggers led to their detection.
We consulted with a group of expert faculty, including surgeons, anaesthetists and others experienced with quality improvement and Trigger Tools, about the adverse events in surgical patients which could be detected through a retrospective review using triggers. Based on published research and their recommendations, we created an initial list of 23 triggers (with one addition for “other”; table 1).
Once an SAE has been identified, the next step is to assign it to a level of harm, using an adaptation of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Index for categorising errors.15 The full NCC MERP Index includes nine categories, labelled A–I. Although the index was originally developed for categorising medication errors, we hypothesised that these categories would be applicable to SAEs. As with prior Trigger Tools,15–17 the IHI Surgical Trigger Tool was designed to count all discovered adverse events—that is, harm to the patient, whether or not the result of an error. Harm was defined as unintended “temporary or permanent impairment of physical or physiological body function or structure”. Accordingly, the tool excluded categories A–D from the NCC MERP Index, because these categories describe errors that do not cause harm as listed in table 2.
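The adapted harm scale described above can be summarised as a simple lookup. The sketch below is illustrative only (the category labels paraphrase the descriptions used in this paper, not the full NCC MERP wording), and shows how the tool counts only categories E–I while excluding the error-without-harm categories A–D.

```python
# Illustrative sketch of the adapted NCC MERP harm scale used by the
# IHI Surgical Trigger Tool. Labels paraphrase the categories described
# in the text; this is not an official coding table.
HARM_CATEGORIES = {
    "E": "Temporary harm",
    "F": "Temporary harm causing initial or prolonged hospitalisation",
    "G": "Permanent harm",
    "H": "Harm requiring life-saving intervention",
    "I": "Death",
}

def classify(category):
    """Return the harm description for a counted SAE.

    Categories A-D describe errors that do not cause harm and are
    excluded by the tool, so they raise an error here.
    """
    if category not in HARM_CATEGORIES:
        raise ValueError(f"Category {category!r} is not counted as harm")
    return HARM_CATEGORIES[category]
```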
Initial testing: summer 2003
Initial testing was conducted in five hospitals that had experience with prior IHI Trigger Tools; they were invited to test the Surgical Trigger Tool in the summer of 2003. Each hospital was provided with a list of the triggers, descriptions of each trigger and the types of adverse event that might be identified. Staff were instructed to select a random sample of 20 inpatient surgical records from one calendar month for review by their personnel; they were not asked to provide data on the surgical specialties or populations represented at their hospital. The de-identified data submitted by the five hospitals to the IHI included: (1) the number of times each trigger was found, (2) whether a trigger identified an adverse event, and, if so, (3) the type of adverse event and (4) the harm category. The IHI conducted a conference call with representatives from all five hospitals and members of the expert faculty to discuss the findings and obtain feedback about the tool. Hospital representatives reported that it was easy to use; in some cases they found the resulting data surprising, since they revealed adverse events that had not been reported through other mechanisms, such as voluntary reports or peer review.
Based on feedback and data from the initial testing, one trigger was removed from the IHI Surgical Trigger Tool: “Body mass index >28” (T4) occurred frequently but was rarely associated with an adverse event, making it a poor marker for identifying harm.
IHI Collaborative on Perioperative Safety: October 2003–October 2004
The resulting IHI Surgical Trigger Tool was then used in an IHI Breakthrough Series Collaborative on Perioperative Safety conducted from October 2003 to October 2004. Participating hospitals were required to use the tool to collect monthly data. Teams from 31 hospitals participated during the collaborative. Some of these teams had previous experience with trigger methodologies; teams new to the methodology were trained by IHI faculty.
The collaborative teams at each hospital generally consisted of three to five people; teams included surgical nurses, surgeons, anaesthetists, quality improvement staff and others, in various combinations. These teams were trained in the use of the Trigger Tool at a Collaborative Learning Session, during which they also received suggestions on methods for selecting random samples of 20 inpatient surgical records from each month for review. Compliance with sampling strategies was not monitored during the project. The pilot populations varied for each team depending on their area of focus; teams selected pilots such as orthopaedic surgery or general surgery or other specialties, depending on the surgical volumes at their hospitals. Data submitted did not include identification of the type of surgical procedure for each patient since no attempt was made to compare the hospitals or types of surgical procedure. All teams were expected to report the following two measures in their monthly project reports: (1) the percentage of patients with an adverse event (number of patients in the sample with an adverse event divided by the total number of patients in sample); and (2) the perioperative harm rate (number of adverse events found divided by the total number of patients in sample).
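The two monthly measures defined above are straightforward ratios. The following sketch (hypothetical data; the function name and sample are illustrative, not from the study) computes both from a list giving the number of adverse events found in each reviewed record.

```python
# Illustrative sketch: computing the two monthly Collaborative measures
# from per-record adverse event counts. Data below are hypothetical.

def collaborative_measures(events_per_record):
    """Return (percentage of patients with an adverse event,
    perioperative harm rate per 100 patients)."""
    n = len(events_per_record)
    patients_with_event = sum(1 for e in events_per_record if e > 0)
    total_events = sum(events_per_record)
    pct_patients = 100.0 * patients_with_event / n
    harm_rate = 100.0 * total_events / n
    return pct_patients, harm_rate

# Hypothetical month: 20 records, three with one event, one with two
sample = [0] * 16 + [1, 1, 1, 2]
pct, rate = collaborative_measures(sample)
print(f"{pct:.0f}% of patients; {rate:.0f} SAEs per 100 patients")
# -> 20% of patients; 25 SAEs per 100 patients
```

Note that the harm rate can exceed the percentage of patients with an event, since one patient may experience several SAEs.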
Further, the IHI asked teams (but did not require them) to provide the following de-identified patient-level data for each patient, for further analysis of the tool’s characteristics: (1) triggers identified, (2) adverse events identified, (3) harm level and (4) description for any adverse events. The hospital team conducted reviews and determined whether an adverse event had occurred and, if so, the harm level. The IHI faculty strongly recommended that a doctor at each hospital review all adverse events identified to confirm that an adverse event had actually occurred (rather than progression of a disease process) and to confirm the assigned level of harm; compliance with this recommendation was not monitored, however.
Teams submitted their de-identified patient-level data to the director of the project (FG). Each hospital was assigned an identification number that was known only to the project director. The director then entered data into a spreadsheet by hospital identification number. The director reviewed each adverse event reported to ensure that it included a clear description of the adverse event and a level of harm from the adapted NCC MERP categories. If the description did not contain sufficient detail to describe the adverse event and the assigned level of harm clearly, the director contacted the submitting hospital for clarification and additional detail. If the director was unable to obtain or clarify this information, they excluded the adverse event from the data. Remaining data were aggregated in a separate file without hospital identifiers.
Initial testing: results
From the 100 patient records reviewed, 63 positive triggers (excluding BMI) were found in 38 records. Thirty-eight occurrences of the BMI trigger were found, with only two adverse events detected; this indicated that BMI was a poor marker for identifying adverse events, and it was therefore subsequently removed from the list. Twenty-one adverse events were identified in 19 patient records, for a rate of 21 SAEs per 100 patients (see table 3). The rate of SAEs per 100 patients ranged from 5/100 to 45/100, and the percentage of patients with an adverse event ranged from 5% to 35%, across the five hospitals.
Hospitals assigned harm categories to each adverse event they detected. Eleven of the events were classified as temporary harm (E), five as temporary harm causing initial or prolonged hospitalisation (F), one as permanent harm (G), three as requiring life-saving intervention (H) and one patient death (I) (see table 4).
Adverse events were of various types and included items that might usually be labelled as “complications”; in the IHI Surgical Trigger Tool, all postoperative surgical complications are considered to be adverse events, as they are unintended consequences of medical care (see table 5).
IHI Collaborative on Perioperative Safety: results
During the 12-month IHI Collaborative on Perioperative Safety, 11 hospitals voluntarily provided de-identified patient-level data. Each hospital submitting data reviewed 20 patient records per month; the mean number of months reported was 4, ranging from 1 to 8.
Aggregate data were reviewed by the IHI project director and a physician faculty member (authors of this paper) to ensure that reported adverse events contained sufficient information to be included and that the assigned level of harm was appropriate. Based on this review, the IHI reviewers removed 25 adverse events reported by hospitals from the aggregate data. In most cases (19) these were category E events that the IHI reviewers judged from the documentation to be positive triggers, but not adverse events. For example, several were postoperative transfusions with no documentation submitted to indicate large numbers of transfused units, or bleeding beyond the expected amount. In a few other cases, multiple adverse events were reported for one patient, but descriptions of events indicated that only one adverse event had occurred that manifested itself in several ways, so the IHI project director and physician faculty member considered these to be one event.
A category F adverse event was reported by a hospital team with a description indicating both urinary tract infection and confusion; the IHI reviewers split this into two events, the infection as category F and the confusion as category E. The IHI reviewers also changed the category of harm for four events: two cases of overnight stay due to postoperative nausea from E to F; one readmission for congestive heart failure (CHF) exacerbation from E to F; and one case in which a patient returned to the operating room for an ischaemic colon and had a colostomy from F to G. A limitation of this review was that the director and physician faculty member did not have the actual patient records and had to rely on the descriptions submitted by hospital teams.
Final data revealed 138 total adverse events in 125 patient records, a rate of 16 SAEs per 100 patients; 125 of the 854 records reviewed (14.6%) contained at least one SAE, as noted in table 6.
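As a consistency check, the two summary measures can be recomputed directly from the final counts reported above:

```python
# Recomputing the aggregate measures from the final counts in the text.
total_records = 854       # patient records reviewed
records_with_event = 125  # records containing at least one SAE
total_events = 138        # SAEs detected

pct_patients = 100.0 * records_with_event / total_records
harm_rate = 100.0 * total_events / total_records

print(f"{pct_patients:.1f}% of patients had an SAE")   # 14.6%
print(f"{harm_rate:.1f} SAEs per 100 patients")        # 16.2
```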
The categories of harm indicated that 44% (61/138) of the harms were category F (initial or prolonged hospitalisation), which was a significantly higher percentage than had been seen in data using other IHI Trigger Tools. Of these, 11/61 (18%) were infections, 10 (16%) were gastrointestinal (including vomiting, ileus) and 8 (13%) were pulmonary. The events in categories G, H and I cumulatively accounted for 8.7% of all adverse events (fig 1).
Events were further classified by the project director and physician faculty member by type using the categories listed in table 7. The top two event types identified were infections (17) and cardiac (17), followed by pulmonary (15).
This study evaluated a practical and easy-to-use method to improve detection of SAEs, which was used within an IHI Perioperative Safety Collaborative. According to verbal feedback from the organisations that used it, this methodology improved the detection of adverse events in surgical patients. The group of hospitals in the IHI Perioperative Safety Collaborative detected surgical adverse events at levels similar to those reported in studies published by organisations that have conducted internal studies of the occurrence of adverse events among their surgical populations.18 Although there is variation in these other studies, most show a minimum occurrence of SAEs in the range of 12–20% of patients, with some showing higher percentages when more minor events are included or when patients with multiple events are counted more than once. The severity of the SAEs matched those seen in other studies.1920
The epidemiology of surgical adverse events in surgical populations from participating hospitals was similar to what has been described in other studies, with surgery-related events such as operative-related injury, bleeding related to surgery, and wound infections being most common in this category.2122 Teams also found a variety of postoperative complications such as pulmonary complications, gastrointestinal complications, cardiac arrhythmias, and fluid and electrolyte disturbances, as seen in other studies.
Many of the organisations in this IHI Collaborative reported that they found much higher numbers of SAEs in their surgical populations with the Trigger Tool approach than they had detected with their existing reporting systems. Most organisations had relied on traditional incident-based voluntary reporting systems, surgical peer review, or morbidity and mortality conferences to detect SAEs. As has been observed in many Trigger Tool studies,15 those traditional approaches substantially under-detect SAEs. Indeed, one organisation that participated in the initial testing phase of this work selected records for Trigger Tool review that had already been reviewed by its surgical peer review committee and had been determined to have no quality issues. When these same records were reviewed using the Surgical Trigger Tool, 15% were determined to have surgical adverse events. As a result, the organisation decided to revamp its surgical peer review process to include the Surgical Trigger Tool. Many methods currently exist for identifying SAEs, such as peer review, voluntary reporting, and manual and electronic surveillance, but the value of the Trigger Tool approach may lie in its relative simplicity, practicality and limited resource requirements compared with these other methods. Clearly, much room exists to build better detection systems for SAEs.
This study has several limitations that may affect its generalisability. First, it was conducted within an established quality improvement project led by the IHI, and as such its participant organisations may not be representative of a broad sample of hospitals in the USA, either in spectrum of care provided or in experience with quality improvement. Second, each hospital employed its own team to use the IHI Surgical Trigger Tool, and although the IHI provided extensive training in the use of the Trigger Tool, it made no attempt to measure the inter-rater reliability of each hospital team. Third, as there is no gold standard for detection of SAEs, it was not possible for us to estimate sensitivity, specificity or positive predictive values for the Surgical Trigger Tool. Finally, although the IHI did provide general guidance on the definitions and severity of harm, detailed explicit criteria were not provided for all the different types of SAEs. The consistency of SAE detection across the different review teams was therefore not measured; although all teams used standard definitions and detection approaches, the resulting potential for variability makes comparisons of SAE rates between organisations highly problematic.
As the number of surgical procedures performed annually in the USA continues to increase, the incidence and severity of potential complications will also probably increase. Better methods for detecting these complications are therefore needed to determine whether surgical quality improvement initiatives actually reduce harm to patients. The Trigger Tool approach may offer a practical and easy-to-use method for detecting safety problems in patients undergoing surgery, and hence provide the basis not only for estimating the frequency of adverse events in an organisation, but also for determining the impact of interventions that focus on reducing adverse events in surgical patients.
Funding: The Institute for Healthcare Improvement (IHI) sponsored all work related to the development and testing of the Surgical Trigger Tool, including the Perioperative Safety Collaborative. FG is a full-time employee of the IHI.
Competing interests: DC is an employee of First Consulting Group (a technology services company) and holds stock in the company. He is an adviser to and holds stock in Theradoc (a medical software company), and was contracted as a faculty member for the IHI Perioperative Safety Collaborative.