
Trends in adverse events over time: why are we not improving?
Kaveh G Shojania,1 Eric J Thomas2
1Sunnybrook Health Sciences Centre, University of Toronto Centre for Patient Safety, Toronto, Ontario, Canada
2Memorial Hermann Center for Healthcare Quality and Safety, University of Texas at Houston, Houston, Texas, USA
Correspondence to Dr Kaveh G Shojania, Sunnybrook Health Sciences Centre, University of Toronto Centre for Patient Safety, Room H468, 2075 Bayview Avenue, Toronto, ON, Canada M4N 3M5; kaveh.shojania{at}sunnybrook.ca


With widespread interest and investments in patient safety in the 13 years following the US Institute of Medicine report To Err is Human,1 the question has understandably arisen: have we decreased medical harm? One widely cited study showed no significant reductions in either the overall rate of harm or the rate of preventable harm in 10 US hospitals chosen on the basis of patient safety activities.2 A second US study,3 though not focused on temporal trends, reported that one third of patients suffered harm from their medical care at three tertiary care hospitals recognised for their efforts in improving patient safety. Given that previous major studies reported adverse event rates in the range of 3–16%,4–10 progress seems sorely lacking.

Adding to this distressing picture, Baines et al11 report in this issue of the journal that the adverse event rate among hospitalised patients in the Netherlands increased from 4.1% in 2004 to 6.2% in 2008. Somewhat reassuringly, the preventable adverse event rate did not change. The increase in non-preventable adverse event rates may reflect better documentation in medical records as a result of interest in patient safety, with the stable rate of preventable events suggesting that safety has not actually worsened. Nonetheless, the main message of this study11 and the two previous ones2,3 remains: sustained attention to patient safety has failed to produce widespread reductions in rates of harm from medical care.

Why has patient safety not improved?

First, while patient safety and healthcare quality have certainly received substantial attention for more than 10 years now, the actual investments in patient safety still pale beside investments in traditional biomedical research. The US National Institutes of Health has a budget of approximately $30 billion,12 roughly 60 times that of the US Agency for Healthcare Research and Quality.13

The ‘war on cancer’ announced by US President Richard Nixon in 1971 has consumed hundreds of billions of dollars. The US National Cancer Institute alone has spent $105 billion. Funding agencies in other countries, philanthropic donors, pharmaceutical companies and individual research centres have spent uncounted billions more.14 While some striking successes have occurred, the overall death rate from cancer has decreased only 5% since 1950.14

This disappointingly small impact has occurred over a much longer time than the patient safety era and with financial investments that are orders of magnitude greater. Moreover, the war on cancer had a tremendous head start, with decades of relevant research and a large scientific workforce. Patient safety began with nothing like the existing research base in physiology and molecular biology, nor anything like the number of people with the expertise (or interest) to develop and test patient safety interventions. That we have made little progress in a relatively short period of time, with modest resources by the standards of most major biomedical endeavours and far fewer people working on the problem, should thus come as no surprise. We get what we pay for.

Second, showing progress in patient safety requires three achievements to have occurred:

  • Identification of interventions that reduce common types of adverse events.

  • Dissemination of (some of) these effective interventions into routine practice.

  • Development of a tool to measure improvements in patient safety problems.

Unfortunately, none has occurred. We have few effective patient safety interventions. Those that may be effective have not been widely adopted (or not adopted in an effective form). And, the gold standard instrument for measuring patient safety problems is probably too blunt to detect changes over time.

The paucity of effective patient safety interventions

Early in the patient safety movement, the US Agency for Healthcare Research and Quality commissioned a compendium of evidence reviews in order to identify promising patient safety interventions.15 Released in 2001 (an update will appear this year), this report met with some criticism from leaders in the patient safety field because of the priority given to very clinical interventions—strategies for reducing hospital-acquired infections, thromboembolism, perioperative complications, and so on—with much lower evidence ratings for patient safety strategies from high reliability industries or for information technology.16

The lead authors of that evidence report (including one of us) replied that clinical research studies related to patient safety were more numerous and rigorous than studies of computerised order entry, teamwork training, interventions to improve safety culture, and so on.17 The debate over which patient safety interventions to pursue came down to whether we ought to prioritise evidence-based interventions that target specific complications of care or broader strategies with the potential to reduce multiple different types of patient safety problems, but for which we have less evidence of effectiveness. The first approach called attention to the benefits of venous thromboembolism prophylaxis, strategies for reducing common healthcare-acquired infections, and interventions to reduce postoperative complications. The second approach promoted information technology and lessons from high reliability organisations. Both approaches have merit, and they do not necessarily imply any fundamental differences in what counts as evidence of effectiveness, though, as it turned out, debates on that topic occurred as well.18,19

Interestingly, perhaps the most widely cited success story in patient safety—the prevention of central venous catheter bloodstream infections—drew on both of these debated approaches. The ‘central line bundle’ involved an aviation-style checklist and lessons about culture change and teamwork drawn from outside healthcare, but the elements of the bundle consisted of very concrete, evidence-based strategies for reducing a specific clinical problem.20 Across 103 intensive care units (ICUs), implementation of the bundle produced a large and statistically significant reduction in infections, from a baseline mean of 7.7 infections per 1000 catheter-days to 1.4 per 1000 catheter-days. Some ICUs virtually eliminated this problem, and a follow-up study showed that the results seemed to be sustained.21

This study represented a landmark achievement in patient safety. No prior intervention explicitly developed as part of the patient safety movement had so thoughtfully combined the results of relevant clinical research with theories about effective change, or been evaluated on such a large scale. Yet, by the standards of traditional clinical research, the study had three notable limitations: no control group, outcome ascertainment that relied on clinicians' decisions to obtain blood cultures (obtaining fewer blood cultures could by itself lower event rates22), and loss of approximately 40% of the potential ICU-months of data.23 Moreover, a recent study aiming to replicate this initiative reported that the control and intervention groups achieved comparable improvements.24

So, the evidence supporting what seemed the best example of progress in patient safety is mixed. The same holds for the evidence on the impacts of computerised order entry and decision support,25,26 rapid response teams,27,28 medication reconciliation,29 duty hour limits for trainees, and strategies for improving patient safety culture.30 Teamwork training has produced at least one robust success: the demonstration of substantial improvements in risk-adjusted surgical mortality.31 However, teamwork training has yet to disseminate widely—certainly not in anything like the very intensive form seen in this study, in which participating centres prepared for months beforehand and operating rooms were closed to optimise participation of surgical staff.

The WHO's surgical checklist32 has disseminated widely, though ineffective implementation is probably common.33 Other widely disseminated patient safety interventions, such as medication reconciliation29 and duty hour limits for trainees,34–37 have limited supporting evidence. In the latter case, given the competing potential benefits of reduced fatigue and the harms of increased hand-offs, the best one can confidently say is that patient outcomes do not seem to have worsened.38

Finally, even if the studies noted above were flawless and generalisable, the evidence base would still be extremely small compared with that for other major causes of morbidity and mortality, such as heart disease and cancer.

How would we know if we had improved?

In comparing adverse event rates in 20 Dutch hospitals in 2008 with those observed in 2004, Baines et al11 used the gold standard approach in patient safety: nurses screened medical records using triggers for possible harms (eg, death, readmission, hospital-acquired infection), and physicians reviewed trigger-positive records for the presence of adverse events. This method, first developed for a little-known study funded by the California Medical Association,39 became famous after its adoption in the Harvard Medical Practice Study.4 It has served as the basis for subsequent major patient safety studies5–10 and also informed the development of the global trigger tool now in widespread use.3,40–42

Many other methods for identifying patient safety problems exist,43,44 each with advantages and disadvantages related to the types of problems they capture, their completeness or sensitivity, the degree to which they lend themselves to calculating event rates, and the extent to which they facilitate improvement efforts by identifying the causes of the harms they detect. Retrospective medical record review probably does provide the best characterisation of the overall rate of harm at a given time. Why, then, does it not provide a good method for detecting improvements in patient safety over time?

One fundamental problem is that ‘adverse events’ constitute a conceptual category encompassing heterogeneous event types: adverse drug events, healthcare-acquired infections, postoperative complications, delayed diagnoses, fall-related injuries, pressure ulcers, and so on. Even major categories, such as adverse drug events, include distinct subcategories requiring different improvement interventions. Computerised order entry targets prescribing and transcription errors, but has no effect on medication administration or dispensing errors. Strategies for reducing falls will have no effect on pressure ulcers.

The ‘systems perspective’ in patient safety45,46 promises to identify cross-cutting problems that contribute to diverse types of events—communication problems that contribute to some diagnostic delays, but also to some medication errors and surgical complications. However, these deeper causal categories—communication, teamwork, human factors, organisational culture—are themselves heterogeneous. For instance, SBAR (Situation-Background-Assessment-Recommendation) may address certain types of dysfunctional interprofessional communication, but it will not address outright failures to communicate (eg, between different physicians caring for the same patient). One day we may have in hand interventions that address latent system problems and reduce multiple adverse event types. But examples of such interventions remain few in number31 and have certainly not disseminated widely.

A second problem facing trigger-tool-style chart review for detecting improvements over time is more practical than conceptual. For some patient safety problems, specific triggers identify virtually all patients who experienced the adverse events of interest (eg, a trigger that captures positive assays for Clostridium difficile more than 48 h after admission will capture almost all hospital-acquired cases). But many common adverse events require more complex detection strategies. A few simple triggers will not capture all surgical site infections or clinically significant diagnostic delays. Trigger-tool-based chart reviews capture adverse events associated with basic (surrogate) outcomes, such as readmission, death, and unplanned admission to an intensive care unit. But many surgical site infections (or diagnostic delays or adverse drug events) do not produce these signals.
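To make the contrast concrete, here is a minimal sketch in Python of the kind of highly specific trigger just described. The data model (admission and assay timestamps passed as plain arguments) and the function name are hypothetical, invented for illustration; real trigger tools run against structured admission and laboratory records.

```python
# Hypothetical sketch of a highly specific trigger rule: flag positive
# Clostridium difficile assays obtained more than 48 h after admission
# as probable hospital-acquired cases. Data model invented for illustration.
from datetime import datetime, timedelta

HOSPITAL_ACQUIRED_CUTOFF = timedelta(hours=48)

def cdiff_trigger(admitted_at: datetime, assay_at: datetime,
                  assay_positive: bool) -> bool:
    """True when a positive assay occurred more than 48 h after admission."""
    return assay_positive and (assay_at - admitted_at) > HOSPITAL_ACQUIRED_CUTOFF

# A positive assay on day 4 of the hospital stay trips the trigger.
print(cdiff_trigger(datetime(2008, 3, 1, 8, 0),
                    datetime(2008, 3, 4, 9, 30), True))  # True
```

Surgical site infections or diagnostic delays admit no comparably crisp rule, which is why a few simple triggers miss so many of them.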

What is the solution?

Detecting the modest improvements associated with most interventions will require surveillance targeted at the specific events those interventions address. If an intervention promises to reduce central venous catheter bloodstream infections or complications of surgery, then detecting an effect depends on measuring those specific outcomes over time, not on performing periodic assessments for the presence of patient safety problems in general.

Targeted surveillance for specific adverse event types almost certainly has higher sensitivity for the adverse events of interest, namely those targeted by effective interventions. It probably also avoids the reliability problem that has plagued adverse event studies: the limited agreement between physician reviewers about which adverse events were preventable. For the focused measurement of specific adverse events, one need not judge preventability. One simply measures catheter-related bloodstream infections (or postoperative complications, fall-related injuries, or whatever the case may be) before and after the intervention. The implementation of interventions designed to reduce these safety problems itself speaks to preventability and makes all such events of equal interest.

We often refer loosely to a ‘cure for cancer’, but cancer includes a wide range of distinct diseases. Treatments for one type of cancer often have limited or no effectiveness against another. Thus, studies that seek to evaluate cancer treatments typically measure the incidence of the specific cancers targeted, not the occurrence of all cancers. Similarly, documenting progress in patient safety requires measuring the specific adverse events targeted by effective patient safety interventions, not periodic surveillance for adverse events in general. Showing the benefits of an effective hand-hygiene campaign requires focused surveillance of healthcare-associated infections.47 Periodic application of a general trigger tool will not have the power to detect changes in infections. And the overall adverse event rate will go down only if the hospital has also implemented effective strategies targeting multiple other event types.
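A back-of-the-envelope power calculation illustrates the point. What follows is a minimal sketch, assuming purely hypothetical numbers (an infection rate falling from 5% to 4% of admissions, a trigger-tool sample of 400 records per period) and a standard normal-approximation formula for a two-sided two-proportion test; it is not drawn from any of the studies cited above.

```python
# Illustrative sketch: approximate power of a two-sided two-proportion
# z-test, via Cohen's arcsine effect size h. All rates and sample sizes
# below are hypothetical, chosen only to illustrate the argument.
from math import asin, sqrt
from statistics import NormalDist

def two_proportion_power(p1: float, p2: float, n_per_group: int,
                         alpha: float = 0.05) -> float:
    """Approximate power to detect a change from rate p1 to rate p2,
    with n_per_group records reviewed in each period (equal groups)."""
    h = abs(2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2)))  # Cohen's h
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)      # two-sided critical value
    return NormalDist().cdf(h * sqrt(n_per_group / 2) - z_crit)

# Hypothetical: infections fall from 5% to 4% of admissions.
# A general trigger-tool review of 400 charts per period: power ~0.10.
print(two_proportion_power(0.05, 0.04, n_per_group=400))
# Targeted surveillance of ~20000 admissions per period: power ~1.0.
print(two_proportion_power(0.05, 0.04, n_per_group=20000))
```

The exact figures matter less than the orders of magnitude: detecting a one percentage point change in a single event type requires vastly more observations than periodic general chart review supplies, which is precisely why targeted surveillance is needed.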

Focused measurement of the patient safety outcomes targeted by specific evidence-based interventions may provide the best way to show progress in patient safety. But there will still remain a role for occasional general chart reviews. Given the ever-increasing complexity of care—new therapies, diagnostic tests, changing models of care delivery, staffing levels frequently stretched to the limit—new types of errors and harms are constantly emerging. General detection methods may provide the first signals of such emerging safety problems. It may be that, in 10 years, adverse event rates will remain the same, but the event types will have changed. That would count as progress. For now, though, we still have our hands full developing interventions for the adverse event types we know about and documenting that they have decreased in frequency.

Acknowledgments

The authors thank Dr Chaim Bell for his helpful comments on a draft of this manuscript. Dr Shojania receives salary support from the Government of Canada Research Chairs programme.

Footnotes

  • Competing interests None.

  • Provenance and peer review Commissioned; internally peer reviewed.