
Editorials

Measuring the quality of hospital care

BMJ 2009; 338 doi: https://doi.org/10.1136/bmj.b569 (Published 18 March 2009) Cite this as: BMJ 2009;338:b569
  1. John Wright, director1
  2. Kaveh G Shojania, director2
  1. Bradford Institute for Health Research, Bradford Royal Infirmary, Bradford BD9 6RJ
  2. Centre for Patient Safety, University of Toronto, Toronto, ON, Canada M4N 3M5
  Correspondence to: John.wright@bradfordhospitals.nhs.uk

    Should focus on effective learning and improvement rather than judgment

    Measuring the quality of hospital care is a thorny business. Health care is complex, and links between clinical practice and patient outcomes are often tenuous and distant. These challenges have not prevented the pursuit of simple indicators that identify the “good” and “bad” hospitals. With the claim of providing such an indicator, the hospital standardised mortality ratio was launched with considerable fanfare—first in the United Kingdom and then in Europe and North America. The ratio identifies hospitals where more patients die than would be expected on the basis of their case mix—the bad hospitals—and hospitals with fewer deaths than expected—the good hospitals.1 2
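    In its usual form (a general formulation rather than the exact model of any particular scheme), the ratio compares the number of deaths observed in a hospital with the number predicted by a case mix model fitted across all hospitals:

$$\mathrm{HSMR} = 100 \times \frac{\text{observed deaths}}{\text{expected deaths}}, \qquad \text{expected deaths} = \sum_{i} \hat{p}_i ,$$

    where $\hat{p}_i$ is the modelled probability of death for admission $i$, given factors such as age, diagnosis, and admission type. Values above 100 mean more deaths occurred than the model predicts; values below 100 mean fewer.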

    The attraction of using the hospital standardised mortality ratio is clear. These ratios focus on an important and unambiguous clinical outcome, and they use routinely available data that, for some conditions, predict death as well as expensive clinical databases do.3 However, their use as indicators of quality and safety of care has been criticised because they may not adequately adjust for case mix or account for chance variation, and because mortality may not be a valid indicator of quality.4 5 6 7 8 In the linked study (doi:10.1136/bmj.b780), Mohammed and colleagues examine a more technical but nonetheless fundamental concern: whether attempts to adjust for case mix augment rather than minimise bias in measuring hospital death rates.9

    Measures of risk may not be uniformly related to patient outcomes across all constituencies, a point sometimes called the “constant risk fallacy.”10 For example, car ownership might seem like a reasonable indicator of financial status. Yet using this measure as an economic adjustor may produce misleading results when applied to people living in rural areas, where car ownership reflects the need for transport more than it does income.10 Similarly, patterns of use of the emergency department may in some areas provide a reasonable measure of illness acuity, but in others they may say more about the availability of alternative settings for receiving urgent care.
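    The distortion this can produce is easy to show with a small numerical sketch. The figures below are invented and are not taken from the linked study; they simply illustrate how a single pooled case mix model misranks two hospitals that deliver identical care when the same adjustment factor carries a different true risk of death at each site.

```python
# Hypothetical illustration of the "constant risk fallacy": two hospitals give
# identical care, but an "emergency admission" style flag identifies sicker
# patients at hospital A than at hospital B, so a pooled case mix model
# under-adjusts at A and over-adjusts at B. All figures are invented.

def expected_deaths(n, flag_rate, pooled_risk_flagged, pooled_risk_other):
    """Deaths predicted by a pooled (all-hospital) case mix model."""
    flagged = n * flag_rate
    return flagged * pooled_risk_flagged + (n - flagged) * pooled_risk_other

n = 10_000                                   # admissions per hospital
pooled_flagged, pooled_other = 0.075, 0.01   # pooled model's risk estimates

# True risks: hospital A's flagged patients die at 10%, hospital B's at 5%;
# unflagged patients die at 1% in both. Care quality is identical.
deaths_a = n * 0.30 * 0.10 + n * 0.70 * 0.01   # 370 deaths
deaths_b = n * 0.30 * 0.05 + n * 0.70 * 0.01   # 220 deaths

expected = expected_deaths(n, 0.30, pooled_flagged, pooled_other)   # 295 for both

print(f"Hospital A ratio: {100 * deaths_a / expected:.0f}")   # ~125, labelled 'bad'
print(f"Hospital B ratio: {100 * deaths_b / expected:.0f}")   # ~75, labelled 'good'
```

    Neither hospital provides worse care; the pooled model simply assumes that the flag carries the same risk everywhere.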

    Mohammed and colleagues found systematic differences in the associations between hospital mortality rates and the factors used in the case mix adjustment for these ratios, including age, emergency admissions, and comorbidity.9 For instance, modest changes in comorbidity correlated with large differences in risk of death between hospitals.

    These findings undermine the credibility of standardised mortality ratios and indicate that their role in labelling hospitals as good or bad is unjustified. It should be recognised, however, that the widespread interest in these ratios reflects the historic inattention to quality measurement in health care. Practitioners have generally resisted performance measurement (or done little to facilitate it), and researchers have often focused on criticising proposed measures rather than developing valid alternatives.

    Measurements of processes rather than outcomes of clinical care have been proposed as more reliable measures of quality and safety.11 Measuring adherence to recommended processes of care seems to avoid the problems of adjusting outcomes for case mix, but many of the same problems arise in the form of eligibility criteria for these processes. For example, patients in hospital A may receive thrombolytic agents less often than those in hospital B because hospital A cares for more complex patients who have more contraindications to thrombolysis.

    Hospital standardised mortality ratios will continue to be used to monitor hospital performance because they use data that are widely available and cheap to collect. The methodological limitations highlighted by Mohammed and colleagues pose less of a problem for using these ratios to monitor and improve quality in individual hospitals because associations between case mix adjustors and risk of death should be relatively constant at individual hospitals over periods of a few years.1 12 Using hospital mortality ratios for individual hospitals may indeed encourage efforts to improve quality and safety of care. Studies have reported falls in hospital mortality ratios linked to quality improvement programmes that have used the ratios as outcomes.1 12 Measurement itself may encourage improvements in processes of clinical care that ultimately lead to improvements in quality and safety.
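    A minimal sketch of that within-hospital use follows. The counts are invented, and the control limits use a simple Poisson approximation of the kind commonly used in funnel plots, so this is an illustration of the idea rather than any particular monitoring scheme.

```python
# Tracking one hospital's standardised mortality ratio over time (invented
# counts). Limits treat observed deaths as Poisson with mean equal to the
# expected count, a common simplification.
import math

years = [2005, 2006, 2007, 2008]
observed = [412, 398, 371, 344]             # observed deaths each year
expected = [400.0, 401.5, 399.2, 402.8]     # deaths predicted by the case mix model

for year, obs, exp in zip(years, observed, expected):
    ratio = 100 * obs / exp
    half_width = 1.96 * math.sqrt(exp)      # approximate 95% interval on the count scale
    lower, upper = 100 * (exp - half_width) / exp, 100 * (exp + half_width) / exp
    status = "within limits" if lower <= ratio <= upper else "outside limits"
    print(f"{year}: ratio {ratio:5.1f} (95% limits {lower:.1f} to {upper:.1f}) {status}")
```

    The point of such a display is the trend within one organisation over time, not a comparison between organisations.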

    Publicly reported quality measures require accuracy and precision to prevent unfair stigmatisation and loss of trust. However, the desire to avoid mislabelling hospitals cannot always trump the pursuit of useful measures of healthcare quality. As with diagnostic tests, we must balance the downsides of false positive results against the risk of missing the disease altogether. Rather than advocating the primacy of one quality metric over another, or reverting to “measurement nihilism”, with scepticism and distrust of clinical measurement, we should continue to identify a range of indicators that are appropriate for different clinical contexts. We should then concentrate on how these can support internal learning rather than legitimise unfounded and often counterproductive external judgments.
