Editorials

The death of death rates?

BMJ 2015; 351 doi: https://doi.org/10.1136/bmj.h3466 (Published 14 July 2015) Cite this as: BMJ 2015;351:h3466
  1. Tim Doran, professor of health policy,
  2. Karen Bloor, professor of health economics and policy,
  3. Alan Maynard, emeritus professor
  1. Department of Health Sciences, University of York, York YO10 5DD, UK
  1. Correspondence to: T Doran tim.doran@york.ac.uk

Using mortality as a quality indicator for hospitals

The history of medicine can be characterised as a long struggle against ignorance and ineptitude.1 In revealing the complexity of human disease, medical science has chipped away at the former while increasing the risk of the latter. A profession that was once responsible for balancing four humours is now obliged to juggle over 14 000 discrete conditions. Fumbles are inevitable. But to what extent are they avoidable?

The National Health Service is still dealing with the fallout from the Francis inquiry into failings at Mid-Staffordshire NHS Foundation Trust.2 One of its first responses was to dispatch Bruce Keogh, the national medical director, to investigate other places of concern—hospitals with persistently high mortality rates.3 But was this the best way to identify rogues?

In a linked article (doi:10.1136/bmj.h3239), Hogan and colleagues measured the association between death rates reported by hospitals and the number of deaths they should have avoided.4 This is a vital piece of information. A strong link would mean that comparative mortality data could serve as a system-wide smoke alarm, providing administrators with an efficient means of monitoring quality of care across the entire health service. Without a link, this alarm becomes a misleading source of false alerts, subjecting outliers to unnecessary suspicion, over-inspection, and reputational damage.5 And far worse: hazardous hospitals lurking inside the funnel are assumed to be safe.

Hogan and colleagues report two main results, the second following from the first. Firstly, the proportion of hospital deaths judged by a panel of experts to be potentially avoidable was just 3.6%. Secondly, the association between standardised mortality rates and avoidable deaths was, unsurprisingly, non-significant, with wide confidence intervals (regression coefficient 0.3, 95% confidence interval −0.2 to 0.7). The signal of avoidable death, it seems, is lost in the din of unavoidable noise. The research team concluded that the association was too weak for overall mortality rates to be an effective monitoring tool.

In apparent anticipation of Hogan and colleagues’ null finding, the Keogh review established plans to construct a national indicator for avoidable in-patient death, based on externally audited case note reviews.3 Unfortunately, and not for the first time, the current study also revealed the fallibility of even the most carefully structured case review. Despite the provision of extensive training and support, experienced clinical reviewers often disagreed on what constituted an avoidable death and were influenced by a range of extraneous factors. Improving reliability (for example, by engaging multiple reviewers for each case) would further increase the costs of what is already likely to be an expensive undertaking. Assuming this can be achieved, there are further problems. Given that 97% of patients survive their stay in hospital, this study implies that the proportion of admissions resulting in an avoidable death is around a 10th of 1%. Even the most accurate indicator of avoidable death would barely scratch the surface of suboptimal care.
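The back-of-envelope arithmetic behind that "10th of 1%" figure can be made explicit. The sketch below simply multiplies the two proportions given in the text (a roughly 3% in-hospital death rate, implied by 97% survival, and the study's 3.6% of deaths judged potentially avoidable); the variable names are illustrative, not from the study.

```python
# Back-of-envelope check of the "around a 10th of 1%" claim.
# Inputs are taken from the editorial: ~3% of admissions end in death
# (97% survive), and 3.6% of those deaths were judged potentially avoidable.
death_rate = 0.03        # proportion of admissions ending in death
avoidable_share = 0.036  # proportion of those deaths judged avoidable

avoidable_per_admission = death_rate * avoidable_share
print(f"{avoidable_per_admission:.4%}")  # prints 0.1080%
```

About one admission in a thousand, which is why even a perfectly accurate avoidable-death indicator would say little about the far larger volume of suboptimal but non-fatal care.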

How then to monitor and improve quality? Pioneer Ernest Codman’s approach was to look beyond adverse outcomes: he took the radical and deeply unpopular step of publishing not only his patients’ outcomes but also his judgments on whether the results could have been improved and the probable causes of failure to achieve “perfection.”6 A century on, the medical profession is still not ready to fully embrace Codman, but it is at least prepared to flirt with him. Recent innovations include aviation-style voluntary incident (“near miss”) reporting,7 structured risk assessment,8 and the use of patient informants,9 but progress has been slow and results underwhelming. This is not because the NHS lacks quality initiatives (clinician revalidation; Commissioning for Quality and Innovation; Quality, Innovation, Productivity and Prevention; the NHS Outcomes Framework; the Quality and Outcomes Framework; the patient safety improvement programme), quality-oriented bodies (Care Quality Commission; Monitor; National Institute for Health and Care Excellence; National Quality Board; National Patient Safety Agency; NHS IQ), or other organisations with quality within their remit (clinical commissioning groups; health and wellbeing boards; Local Health Watch; the royal colleges). The costs and effectiveness of all this activity—with the inherent potential for confusion, duplication, omission, and contradiction—are unknown, and the accretion of quality overseers suggests a desire to minimise regret rather than to maximise patient outcomes.

With this top-down hubbub, the best approach may be to follow Percival’s advice and start at the bottom: unite “tenderness and steadiness”10 by indoctrinating trainees in the medical professions with the principles of quality and compassion. Once these trainees emerge into practice there should be continuing and career-long support, with protected time for effective audit and reflection (including, but not limited to, revalidation processes), and properly aligned financial and reputational incentives. Clinicians, administrators, and policy makers also need to be statistically literate and able to distinguish common causes of variation from special causes using appropriate metrics. Rationalisation of candidate measures is required,11 and the evidence is mounting that there may be no future for summary mortality rates. However carefully they are adjusted, these rates do not account for recording errors, variation in risk across hospitals, variation in performance within hospitals, and the availability of alternative places where patients can die.12 Nor do they correlate with avoidable death.
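The distinction between common-cause and special-cause variation is usually operationalised with funnel plots of the kind discussed below. As a minimal sketch only, assuming a simple binomial model for hospital deaths (real mortality indicators use case-mix adjustment and often account for overdispersion), the control limits can be computed as follows; the function name and the worked numbers are illustrative, not from the linked study.

```python
import math

def funnel_limits(p_overall, n, z=3.0):
    """Approximate binomial control limits for a funnel plot.

    p_overall: the overall (benchmark) death rate across all hospitals.
    n: the number of admissions at one hospital.
    z: number of standard errors; z=3 gives roughly 99.8% limits,
       the outer funnel conventionally used to flag outliers.
    """
    se = math.sqrt(p_overall * (1 - p_overall) / n)
    return max(0.0, p_overall - z * se), min(1.0, p_overall + z * se)

# Hypothetical example: overall rate 3%, a hospital with 20 000 admissions.
lo, hi = funnel_limits(0.03, 20_000)
# An observed rate inside (lo, hi) is consistent with common-cause
# (chance) variation; a rate outside the funnel suggests a special
# cause worth investigating -- though, as the linked study shows,
# "outside the funnel" need not mean "more avoidable deaths".
```

The limits narrow as admissions grow, which is what gives the plot its funnel shape: small hospitals can show large rate swings by chance alone, so a fixed rate threshold would flag them unfairly.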

Nevertheless, many within the NHS will find it difficult to accept that a tool that identified Mid-Staffordshire NHS Foundation Trust could fail to identify other sinners, and faith in funnel plots is likely to remain strong. Even apostates may find it difficult to let go in the absence of more cost effective alternatives. It will, after all, take a brave administrator to ignore an outlier.


Footnotes

  • Research, doi:10.1136/bmj.h3239
  • Competing interests: We have read and understood the BMJ policy on declaration of interests and declare the following: none.

  • Provenance and peer review: Commissioned; not externally peer reviewed.
