
Editorials

Assessing the quality of hospitals

BMJ 2010; 340 doi: https://doi.org/10.1136/bmj.c2066 (Published 20 April 2010) Cite this as: BMJ 2010;340:c2066
  Nick Black, professor of health services research
  London School of Hygiene and Tropical Medicine, London WC1E 7HT
  nick.black{at}lshtm.ac.uk

    Hospital standardised mortality ratios should be abandoned

    The quality of care provided by hospitals needs to be assessed objectively, not only to stimulate clinicians and managers to make improvements but also to ensure public accountability, to enable patients to make informed choices, and to facilitate informed commissioning. Given the importance of all these activities, the measures used to assess quality must have sufficient validity and reliability. This is not true of the main measure used in many countries, including the United Kingdom: the hospital standardised mortality ratio (HSMR).1 Before examining the practical and methodological shortcomings in England, many of which are discussed by Lilford and Pronovost in the linked article (doi:10.1136/bmj.c2016),2 the very concept of using hospital deaths to judge a hospital’s performance needs to be questioned.
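
    In general terms (the precise specification varies between suppliers, as discussed below), an HSMR is an indirectly standardised ratio: observed deaths divided by expected deaths, scaled to 100, with expected deaths obtained by summing patient level risks from a case mix model. A sketch of the general form only, not any one supplier’s exact method:

        \[
          \mathrm{HSMR} \;=\; 100 \times \frac{O}{E}
                        \;=\; 100 \times \frac{\sum_i d_i}{\sum_i \hat{p}_i}
        \]

    Here d_i is 1 if patient i died in hospital and 0 otherwise, and p̂_i is that patient’s modelled probability of death given characteristics such as diagnosis, age, sex, comorbidities, and type of admission. A ratio of 100 means deaths matched the model’s expectation; everything that follows turns on how well E can be estimated.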

    A consequence of the failure to provide alternative forms of care has been that hospitals have taken on the role of providing a place for people to die. About half of us will end our days in a hospital bed. This makes it perverse to use a hospital’s mortality statistics to judge its quality of care, given that deaths are often an expected and accepted outcome. The incongruity of using mortality to assess a hospital is exacerbated by geographical variation in the proportion of deaths that occur in hospital (40-65%), which reflects not only the availability of alternative forms of end of life care, such as hospices and community palliative services, but also the cultural, religious, and socioeconomic characteristics of the local population. It is no surprise, then, that the higher the proportion of all deaths in a population that take place in hospital, the higher that hospital’s HSMR will be.3

    Aside from the inappropriateness of the concept of using death as a measure of hospital quality, several practical problems arise from shortcomings in the data used to derive HSMRs. The first results from variation in the diagnostic behaviour of doctors and hospitals. This leads to problems if the method of calculating the HSMR does not include all causes (or diagnoses) of death, as is the case with one approach, which excludes 20% of deaths.4 The resulting HSMR will then partly depend on whether a death happens to be ascribed to an included or an excluded diagnosis, as the toy example below shows.
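
    A toy illustration in Python (the round numbers are invented, not real data): because almost every death carries a modelled risk well below 1, recoding deaths into an excluded diagnosis removes more from the observed count than from the expected count, so the ratio falls through coding alone.

        # Hypothetical arithmetic only; the figures are invented for clarity.
        observed, expected = 100, 100.0     # HSMR = 100 x 100/100 = 100
        print(100 * observed / expected)    # 100.0

        # Recode 5 deaths (each carrying a modelled risk of 0.2) into an
        # excluded diagnosis: observed falls by 5, expected by only 1.
        observed -= 5
        expected -= 5 * 0.2
        print(100 * observed / expected)    # ~96.0, with no change in care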

    Secondly, many secondary diagnoses (comorbidities), which are crucial for case mix adjustment, are commonly missing from hospital episode statistics, and those that are included often contain inaccuracies: in 2007-8, on average 17% were wrong, with an interquartile range of 8% to 26%.5 The third, and most serious, shortcoming of the data is the failure of hospital episode statistics to recognise that many patients who die were admitted for end of life care. On average, hospital trusts report that only 4.5% of their patients who die were in this category, with many hospitals reporting none.6 Even the hospital that reported the most (22%) in 2007-8 may have underestimated this figure, because a detailed review of case notes at one trust showed that the true proportion was 37%. When this was taken into account, its HSMR fell dramatically from 105 to 68; the sketch below illustrates the arithmetic.
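
    A minimal sketch of that arithmetic in Python, with invented figures chosen only to reproduce the movement from 105 to 68 (they are not the trust’s actual data): flagging admissions as being for end of life care raises those patients’ modelled risk of death towards 1, which raises expected deaths and so lowers the ratio even though observed deaths are unchanged.

        # Hypothetical figures for illustration; only the ratios 105 and 68
        # come from the editorial.
        def hsmr(observed, expected):
            """Indirectly standardised mortality ratio, scaled to 100."""
            return 100 * observed / expected

        observed = 525                    # deaths recorded at the trust

        expected_unflagged = 500.0        # model ignores end of life admissions
        print(hsmr(observed, expected_unflagged))   # 105.0

        # Recognise that 37% of admissions ending in death were for end of
        # life care, where modelled risk is close to 1: expected deaths rise.
        expected_flagged = 772.0
        print(hsmr(observed, expected_flagged))     # ~68.0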

    Given these shortcomings in the data, it is no surprise to see how unstable HSMRs are as a measure. Recent reports that overall hospital mortality in England fell by an unbelievable 7% in only 12 months, and in some hospitals by more than 30%, show the lack of validity of HSMRs.4 Their validity is also undermined by the finding that different methods for deriving HSMRs produce different results. For example, the mortality ratio for 2007-8 for Basildon and Thurrock Hospital was 132 when derived by one company but 107 when derived by another.7

    Despite international support for using HSMRs to determine the quality of hospitals, particularly among policy makers and regulators, the validity of this measure has not been established. Indeed, it has barely been investigated. This could be done by comparison with more detailed, in-depth methods of determining the quality of clinical care. Meanwhile, advocates of HSMRs question whether their accuracy matters. They claim that, regardless of all the shortcomings, the publication of HSMRs is justified because it stimulates hospitals to improve their performance. To support this, they cite examples of secular changes in HSMRs, ignoring the fact that these apparent improvements do not distinguish between data artefacts (such as changes in coding practice and in admission and discharge policies) and real improvements. This cavalier approach ignores the danger of unjustified and unfair criticism of hospitals, with the attendant risks of damaging staff morale and public confidence.8 It also risks undermining staff and public confidence in quality assessment in general, encouraging scepticism about whether performance can ever be measured accurately.

    Some, although not all, of the shortcomings of HSMRs have been recognised by regulators who advocate their use,4 9 and by the recent inquiry into poor care at Mid Staffordshire NHS Foundation Trust.10 The proposed solution to the methodological shortcomings is to achieve a consensus among advocates on how to calculate HSMRs, while the problem of the public, managers, and the media misinterpreting the agreed measures is to be solved by promoting a better understanding of their indicative, rather than definitive, role. Such caution is to be welcomed but is unlikely to be realised, given the nature of the news media.

    The inadequacies of HSMRs and the potential harm they may cause do not mean that we must abandon our attempts to measure the quality of hospital care. Instead, we should turn to the increasing number of more specialised sources of data, in particular those established for national clinical audits. Although these databases do not cover all conditions or services, they could provide meaningful comparisons of hospitals for many services, such as myocardial ischaemia,11 critical care,12 lung cancer, trauma, and renal replacement therapy. An additional benefit is that many of them consider outcomes other than death (including morbidity and disability) as well as aspects of the process of care. A shift to this approach would gain the credibility and support of clinicians and provide the public with a much richer and more valid account of how a hospital is performing. These benefits have been recognised by the Department of Health in England with the introduction of hospital quality accounts from April 2010, one aim of which is to enhance hospitals’ participation in, and use of, these data sources. This should be accompanied by the abandonment of HSMRs, which are not fit for purpose.


    Footnotes

    • Analysis, doi:10.1136/bmj.c2016
    • Competing interests: The author has completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declares: (1) No financial support for the submitted work from anyone other than his employer; (2) No financial relationships with commercial entities that might have an interest in the submitted work; (3) No spouse, partner, or children with relationships with commercial entities that might have an interest in the submitted work; (4) He chairs the National Clinical Audit Advisory Group, a non-governmental public body that advises the Department of Health in England, and is a trustee of the Intensive Care National Audit and Research Centre.

    • Provenance and peer review: Not commissioned; externally peer reviewed.

    References