Estimating deaths due to medical error: the ongoing controversy and why it matters
Kaveh G Shojania1, Mary Dixon-Woods2

1 Department of Medicine, Centre for Quality Improvement and Patient Safety, University of Toronto, Toronto, Ontario, Canada
2 Cambridge Centre for Health Services Research, University of Cambridge, Institute of Public Health, Cambridge, UK

Correspondence to: Dr Kaveh G Shojania, Sunnybrook Health Sciences Centre, Room H468, 2075 Bayview Avenue, Toronto, Ontario, Canada M4N 3M5; kaveh.shojania{at}sunnybrook.ca


One important reason for the widespread attention given to the 1999 US Institute of Medicine (IOM) report To Err Is Human1 lies in its estimate that medical error was to blame for 44 000–98 000 deaths each year in US hospitals. This striking claim established patient safety as a public concern, strengthened the case for improving the science underlying safety and motivated providers, policymakers, payers and regulators to take safety seriously. Some did express disquiet about the validity of the figures cited,2 including one of the principal investigators of the two studies that provided the data for these estimates.3

A decade and a half later, Makary and Daniel4 attribute an even higher toll to medical error: 251 454 deaths in US hospitals per year, making medical error, they say, the third-leading cause of death in the USA. Unsurprisingly, this claim received widespread coverage in multiple media channels. It also ignited scientific controversy about the basis of the estimate and the role of mortality as a patient safety indicator (PSI). In this paper, we address this controversy and why it matters. We propose that the new estimate is very likely to be wrong and, worse, that it risks undermining rather than strengthening the cause of patient safety.

The new paper is not a study

Though the paper by Makary and Daniel was widely cited as ‘a study’, it presented no new data, nor did it use formal methods to synthesise the data it drew from previous studies. The authors simply took the arithmetic average of four estimates published since the IOM report: one from HealthGrades,5 a for-profit company that markets quality and safety ratings, one from a report by the US Office of the Inspector General (OIG)6 and two from peer-reviewed articles (table 1).7 ,8 The paper applied no established methodology for quantitative synthesis, nor did it discuss either the intrinsic limitations of the studies used or the errors introduced by the extrapolation process. To bolster their claims, Makary and Daniel did highlight the agreement between their estimate and that of a similar analysis published a few years ago by James.9 The apparent consensus is not, however, surprising, since both analyses drew on mostly the same studies (listed in table 1, together with a more recent analysis commissioned by the Leapfrog group10).
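To illustrate the difference between simply averaging point estimates and a formal quantitative synthesis, the sketch below compares an unweighted arithmetic mean with an inverse-variance weighted mean, one standard device of quantitative pooling. All numbers in the sketch are invented for illustration and are not the estimates in table 1.

    # Hypothetical illustration only: an unweighted arithmetic mean (the approach
    # described above) versus an inverse-variance weighted mean. These estimates
    # and standard errors are invented and are NOT the figures from table 1.
    estimates = [130_000, 180_000, 250_000, 400_000]  # hypothetical annual deaths
    std_errors = [20_000, 60_000, 90_000, 150_000]    # hypothetical standard errors

    arithmetic_mean = sum(estimates) / len(estimates)

    weights = [1 / se ** 2 for se in std_errors]
    weighted_mean = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

    print(f"unweighted arithmetic mean:     {arithmetic_mean:,.0f}")
    print(f"inverse-variance weighted mean: {weighted_mean:,.0f}")

The point of the contrast is simply that a formal synthesis down-weights imprecise estimates, whereas a plain average treats every figure as equally trustworthy.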

Table 1

Studies generating estimates of deaths due to medical error*

Issues with the studies on which estimates of deaths due to medical error are based

Some of the widely quoted estimates of deaths due to medical error, including those of the IOM,1 Makary and Daniel4 and James,9 are based on studies that did not in fact set out to estimate the rate of mortality linked to medical error. Instead, these primary studies sought to measure the prevalence of harm from medical care (ie, adverse events).

Consistent with their primary purpose, these studies included no methodology for making judgements about the degree to which adverse events played a role in any deaths that subsequently ensued. For instance, a patient admitted to the intensive care unit with multisystem organ failure from sepsis might develop a drug rash from an antibiotic to which he has previously exhibited an allergic reaction. This patient has certainly experienced a preventable adverse event. But, if the patient eventually dies of progressive organ dysfunction a week after the antibiotic was changed, the medical error probably did not cause the death. An error that occurred close in time to a death is not a sufficient basis for concluding that the error caused the death. Yet these studies have no explicit methodology for handling this situation—for distinguishing deaths where error is the primary cause from deaths where errors occurred but did not cause the fatal outcome.

A further problem with basing estimates on adverse event and trigger tool studies of the type used by Makary and Daniel (and in the similar review by James9) is that they typically involve very small numbers of deaths. For instance, one study used a trigger tool approach to review 100 charts per quarter from each of 10 hospitals in North Carolina from January 2002 to December 2007.7 This study sought to detect any decline in adverse events that might have occurred as a result of patient safety efforts. In passing, the authors report that 14 adverse events were judged to have ‘caused or contributed to a patient's death’. These 14 deaths represented 0.6% of the patients in the study. Similarly, one US government report included three preventable deaths;11 another reported 12 deaths.6 One of the widely quoted peer-reviewed studies identified nine deaths.8 Any extrapolation that generalises from so few deaths (14 or fewer) to estimates of 200 000–400 000 deaths4 ,9 surely warrants substantial scepticism.
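To give a sense of how fragile such extrapolations are, the sketch below places an exact confidence interval around 14 observed deaths and then scales it up. The chart count (roughly 2300, consistent with 14 deaths being about 0.6% of patients) and the 35 million annual US admissions are illustrative assumptions, not figures taken from the cited studies.

    # Illustrative sketch only: the statistical uncertainty around 14 observed
    # deaths, before any extrapolation to the whole country. The chart count and
    # the national admission count below are assumptions for illustration.
    from scipy.stats import beta

    deaths, charts = 14, 2300
    admissions = 35_000_000  # assumed annual US hospitalisations (illustrative)

    # Exact (Clopper-Pearson) 95% confidence interval for the underlying rate
    lower = beta.ppf(0.025, deaths, charts - deaths + 1)
    upper = beta.ppf(0.975, deaths + 1, charts - deaths)

    print(f"observed rate: {deaths / charts:.2%}")
    print(f"95% CI for the rate: {lower:.2%} to {upper:.2%}")
    print(f"extrapolated range: {lower * admissions:,.0f} to {upper * admissions:,.0f} deaths")

Even before considering questions of preventability or causation, the interval spans a several-fold range once scaled to national admission counts.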

The need for scrutiny is particularly important because when studies are designed specifically to identify preventable deaths, they typically report low rates. Studies that have reviewed inpatient deaths and asked physician reviewers to judge preventability have reported proportions under 5%, typically in the range of 1%–3%.12–15 The largest and most recent of these studies13 reported that trained medical reviewers judged 3.6% of deaths to have at least a 50% probability of being avoidable.

Epidemiological errors

While most of the studies used by Makary and Daniel did not have the estimation of deaths due to medical error as their primary purpose, the estimate from HealthGrades did. This estimate, and a more recent one from the Leapfrog group10 (table 1), uses a methodology that combines the frequency of hospital-acquired conditions (HACs), such as central line bloodstream infections and pressure ulcers, with estimates of the mortality attributable to these HACs. Some of these HACs correspond to the PSIs produced by the US Agency for Healthcare Research and Quality.16

Using this methodology, HealthGrades estimated 389 576 deaths due to preventable adverse events per year in the USA (circa 2000–2002). From 2000 to 2010, the annual number of inpatient deaths in the USA ranged from 715 000 to 776 000,17 meaning that HealthGrades proposed that over 50% of all inpatient deaths are preventable. The more recent Leapfrog analysis produces a somewhat lower estimate of avoidable deaths in US hospitals each year—206 021,10 the lower number reflecting deliberate avoidance of double-counting deaths in patients who developed more than one HAC. Preventable conditions such as pressure ulcers, thromboembolism and healthcare-acquired infections occur, of course. They occur far more often than they should. They are profoundly distressing for patients and their families. But saying that they account for roughly 30%–50% of inpatient deaths flies in the face of clinical experience. It is likely instead that many patients die with, rather than of, these conditions.
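For reference, a simple check of the proportions implied by these figures, using only the numbers quoted above:

    # Simple check using only figures quoted in the text: the HealthGrades and
    # Leapfrog estimates as shares of the 2000-2010 range of annual inpatient deaths.
    healthgrades, leapfrog = 389_576, 206_021
    for total_inpatient_deaths in (715_000, 776_000):
        print(f"of {total_inpatient_deaths:,} deaths: "
              f"HealthGrades implies {healthgrades / total_inpatient_deaths:.0%}, "
              f"Leapfrog implies {leapfrog / total_inpatient_deaths:.0%}")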

How could the estimated death toll be so wrong?

The PSIs are problematic as a basis for estimating mortality rates attributable to healthcare-acquired conditions. A recent systematic review found that all but one of these PSIs have a positive predictive value of <80%18 and some developers of the indicators acknowledge that they have only moderate validity.19 A more fundamental challenge is a basic epidemiological one: confounding. Patients at risk of HACs are also those at increased risk of dying from their underlying conditions. For instance, in one analysis, patients who developed Clostridium difficile infection had a significantly higher baseline risk of death than did patients who never developed this HAC (8.0% vs 1.8% baseline risk).20 This type of confounding makes it very hard to allocate aliquots of blame to failures of medical management versus the patient's underlying illnesses. Thus, when errors are followed by death, it is only rarely straightforward to adjudicate the extent to which error contributed to death.
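A deliberately simplified, entirely hypothetical sketch of this confounding problem follows: if all excess mortality among patients with an HAC is attributed to the HAC while ignoring their higher baseline risk, the attributable death count is inflated. Only the 8.0% and 1.8% baseline risks come from the cited analysis; the case count and observed mortality are invented.

    # Hypothetical illustration of confounding by baseline risk. Only the 8.0%
    # and 1.8% baseline risks come from the cited C. difficile analysis; the
    # case count and the observed mortality among infected patients are invented.
    hac_cases = 10_000
    baseline_risk_hac = 0.080       # baseline death risk of patients who develop the HAC
    baseline_risk_other = 0.018     # baseline death risk of patients who do not
    observed_mortality_hac = 0.120  # assumed observed mortality among HAC patients

    # Naive attribution: treat every death above the low-risk group's rate as caused by the HAC
    naive = (observed_mortality_hac - baseline_risk_other) * hac_cases

    # Attribution against the HAC patients' own (higher) baseline risk
    adjusted = (observed_mortality_hac - baseline_risk_hac) * hac_cases

    print(f"naive attributable deaths:    {naive:,.0f}")
    print(f"adjusted attributable deaths: {adjusted:,.0f}")

In this toy example, ignoring the sicker baseline of the patients who acquire the HAC more than doubles the apparent death toll attributable to it.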

If we acknowledge that the contribution error makes to a particular fatal outcome is highly variable and often only one of many factors implicated in a death, the fallacy of comparing deaths due to medical error with deaths due to the causes listed in the Centers for Disease Control and Prevention's classification system becomes clear. That classificatory system lists causes of death based either on tightly defined proximal causes (eg, suicide, currently the 10th leading cause of death in the USA) or on well-specified physiological disease processes (eg, heart disease, the leading cause21). Deaths due to medical error, on the other hand, have no such well-bounded properties: their definition is elusive and changes (often rapidly) over time22 and error is only rarely the direct cause of death (as, eg, when a patient is given a massive overdose of an anaesthetic agent).

All of the estimates we critique involve another basic epidemiological error: they extrapolate crudely to populations that were not included in the original studies. For instance, the two US OIG reports focused on Medicare patients hospitalised for at least 24 hours. The most common reason for hospitalisation in the USA—accounting for around 10% of the total—is delivery of a live newborn.23 Yet Medicare eligibility depends on age (≥65 years) or having end-stage kidney disease; these populations include few patients likely to deliver a baby. Thus, estimates of deaths due to medical error derived from studies that did not include normal deliveries are applied to a population (all US hospitalisations) in which this is the most common admission diagnosis. Similar issues apply to other populations, such as psychiatric patients, rehabilitation stays and admissions for less than 24 hours. Even when these patients are excluded from the adverse event study,7 the extrapolation involves applying the risks of preventable death among medical and surgical patients to these much lower-risk hospitalisations.
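As a hypothetical illustration of this point, applying a death rate derived from medical and surgical patients to every admission overstates the total relative to an extrapolation that treats low-risk admissions separately. Only the roughly 10% delivery share comes from the text; every other number below is invented.

    # Hypothetical illustration: a crude extrapolation that applies a
    # medical/surgical preventable-death rate to all admissions, versus one that
    # treats low-risk admissions (eg, normal deliveries) separately. Only the
    # ~10% delivery share comes from the text; the other numbers are invented.
    admissions = 35_000_000      # assumed annual US hospitalisations (illustrative)
    delivery_share = 0.10        # share of admissions that are deliveries (from the text)
    rate_med_surg = 0.006        # assumed preventable-death rate in the studied populations
    rate_delivery = 0.0001       # assumed (much lower) rate for normal deliveries

    crude = admissions * rate_med_surg
    stratified = admissions * ((1 - delivery_share) * rate_med_surg
                               + delivery_share * rate_delivery)

    print(f"crude extrapolation:      {crude:,.0f}")
    print(f"stratified extrapolation: {stratified:,.0f}")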

Why the fuss? What harm could come from estimating so many deaths due to medical error?

Around 700 000 deaths occur in US hospitals annually.17 Makary and Daniel's estimate that over 250 000 of these are preventable implies that around a third (or more) of inpatient deaths result from medical error or preventable adverse events. If, as the studies that actually set out to identify preventable deaths have concluded, 3.6% is closer to the correct rate, then something like 25 000 deaths might be averted each year by eliminating medical error—a far cry from 251 454. Every life avoidably lost is a tragedy. No one is disputing the need to improve safety. But does this 10-fold difference in the estimated death toll matter?
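The arithmetic behind that comparison, using only the figures already quoted:

    # The arithmetic behind the comparison above, using figures quoted in the text.
    inpatient_deaths = 700_000
    makary_daniel = 251_454
    rate_from_death_reviews = 0.036  # 3.6% of deaths judged preventable in death-review studies

    print(f"share of deaths implied by Makary and Daniel: {makary_daniel / inpatient_deaths:.0%}")
    print(f"deaths implied by a 3.6% preventable rate:    {rate_from_death_reviews * inpatient_deaths:,.0f}")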

One could argue, as many did in the wake of the IOM estimate, that we should avoid picking apart estimates of mortality rates for medical error, since they draw attention to a much-neglected issue. This argument lies at the heart of one of the central tensions in the field of patient safety since its inception: the tendency to call for action on the basis of limited evidence.24–26 The urge to do so may arise from the desire to make up for lost time—the many decades of exclusive focus in medical research on discovering new tests and treatments while neglecting the basic duty not to harm patients. But, it is an urge that must be kept in check.

First, patient safety needs to establish its scientific credentials. It does the field no favours if the basic epidemiological facts cannot be trusted. For a start, it means that the metrics of progress will be constantly disputed.27 After 16 years of sustained attention to improving patient safety and other aspects of healthcare quality, it is disappointing to find some claiming that the problem is fivefold larger than previously announced. We need reliable measures of progress over time, just as any field does. If ‘anything goes’ as far as the metrics are concerned, we have no hope of demonstrating that all the investment and effort in patient safety are worth it—thus discouraging further investment.

Second, the narrow focus on preventable death risks distracting attention from the many harmful consequences of failures to manage risks adequately in healthcare that do not result in death.2 ,3 ,12 ,28–30 Just as most deaths do not involve medical error, most medical errors do not produce death—but they can still produce substantial morbidity, costs, distress and enduring suffering. Highlighting preventable deaths as the focus of patient safety efforts risks drawing resources away from many safety problems and many settings of care—including most non-hospital environments—where death is not the most relevant outcome. For instance, medication safety is universally regarded as one of the largest categories of safety problems; yet drug errors, though very common, do not usually result in fatal outcomes.31 Most pressure ulcers do not result in death, but they are a painful and miserable experience for patients. Does this mean that medication safety and pressure ulcers should not receive attention? Of course not. But, this is the risk of repeatedly focusing the attention of the public and policymakers on death as the sole outcome of interest.

The bottomless well of medical error

In listservs and blogs discussing the controversy over deaths due to medical error, we have encountered responses to any criticism of the estimated death toll that take the form: “But those numbers don't even include…deaths due to unnecessary care, diagnostic errors, excessive radiation from overuse of radiologic investigations…”. In other words, the argument amounts to, “Even if the analysis did have some problems, it didn't include other important types of deaths due to medical error. So, the number is probably still about right”. These additional potential causes of death due to medical error have some legitimacy. For instance, one of us (KGS) has estimated that about 5% of deaths in US hospitals involved missed diagnoses that, had they been detected prior to death, might have altered the fatal outcome.32 Such deaths due to misdiagnosis are, however, difficult to identify in chart review studies of preventable deaths, since so few autopsies occur in most US hospitals.

That said, this is a very different approach to estimating deaths due to medical error from extrapolating from adverse event studies. It starts by identifying all the important types of medical error we can think of—diagnostic errors, underuse of beneficial therapies (eg, failure to follow guidelines for the management of coronary artery disease), overuse of non-beneficial ones and so on. Then, to generate a total, it combines the frequency of these errors with estimates of how often each causes death. Even putting aside the speculative nature of many of the inputs to such an estimate, this approach almost certainly hugely overestimates mortality attributable to error. A patient can have a diagnostic error in connection with one aspect of their care, a medication safety problem with another, and not receive guideline-concordant care for yet another condition. Each of these categories of medical error may have an associated attributable mortality. Yet, the patient can only die once. Adding up the attributable mortalities for every type of error will substantially overestimate deaths due to errors.
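A toy illustration of this double counting, with entirely invented data: summing deaths across error categories counts any patient who experienced more than one category of error more than once.

    # Toy illustration (invented data): summing deaths across error categories
    # double-counts patients who experienced more than one category of error.
    deaths_with_errors = {
        "patient_A": {"diagnostic error", "medication error"},
        "patient_B": {"medication error"},
        "patient_C": {"diagnostic error", "underuse of therapy", "medication error"},
    }

    summed_across_categories = sum(len(errs) for errs in deaths_with_errors.values())
    actual_deaths = len(deaths_with_errors)

    print(f"deaths counted by summing category-attributable totals: {summed_across_categories}")
    print(f"actual deaths: {actual_deaths}")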

Another problem with “But we didn't even include A, B and C when we counted up all the deaths due to medical error” is that this reasoning is unevenly applied. The same logic is not so assiduously pursued for other leading causes of death—no one argues, for example, that because many deaths from heart disease, stroke and kidney failure involve diabetes, diabetes should therefore be counted as the leading cause of death.

If passionate advocates for reducing medical error want constantly to redraw the boundaries of death due to X, others who are passionate about different diseases and determinants of health will do likewise—an arms race of who can count the most deaths due to the object of their advocacy. We appreciate the need for passion to capture attention and kick-start efforts to improve healthcare. But, at some point, we need to roll up our sleeves and do the patient, scientific work of characterising the target problems and evaluating our progress over time. Constantly expanding the boundaries of what counts as a death due to medical error will not serve that goal, but improved reliability and validity will.

Conclusion

One of our missions as editors of BMJ Quality and Safety has been to elevate the scientific standards of efforts aiming to measure or improve safety and healthcare quality more broadly. Laudable enthusiasm for the goal of reducing suffering has always had to be tempered with adherence to rigorous methods. We do not want to disseminate ineffective patient safety strategies any more than we want inadequately tested new medications or surgical treatments. We also do not want to alienate key clinical partners in efforts to improve patient safety. Given their everyday clinical experiences, most healthcare professionals will struggle to believe that errors in the care they deliver account for one-third of all hospital deaths. Given the basic flaws in the estimates that we and others have identified, it is not clear on what basis they could be persuaded otherwise. Parading dubious statistics instead has the effect of disengaging clinicians from what may appear to be a field lacking in credibility, damaging their confidence in interventions intended to improve safety and threatening professional-patient relationships.

We are deeply committed to improving patient safety and the quality of care. Avoidable deaths and suffering can best be reduced by improving the evidence base, and that must start with sound epidemiology. Without it, implausible estimates of deaths due to medical error will, over time, do more to erode the cause of patient safety than headline-friendly figures will do to help it.

References

Footnotes

  • Contributors KGS and MD-W contributed to the conception of the paper; they critically read and modified subsequent drafts and approved the final version. They are both editors at BMJ Quality and Safety.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.
