
Letters

Performance league tables

BMJ 2002; 324 doi: https://doi.org/10.1136/bmj.324.7336.542 (Published 02 March 2002) Cite this as: BMJ 2002;324:542

League tables are unreasonably simple

  1. Jonathan Howell (jonathan.howell{at}lycos.com), consultant in public health medicine
  1. South Staffordshire Health Authority, Stafford ST16 3SR

    EDITOR—Not comparing like with like is the easy and traditional battle cry of those seeking to cast doubt on league tables of health service providers. It is unfortunate, therefore, that the tables published for the benefit of the public in the Times as the hospital consultants' guide fall at the first hurdle: they rest on what seems to be a technical misuse, based on misleading comparisons, of one of the key statistics.1 2

    Ranking in the league tables is based on both standardised mortality ratios and death rates per 100 000, although these summarise some complex statistical workings.3 Standardised mortality ratios are a seemingly well understood means of comparing the mortality of a local population with that of a wider population, taking into account the age and sex distribution. But the Times supplement misleadingly describes a standardised mortality ratio of 100 as the national average, with a higher figure indicating a higher than average number of deaths. Although this statement might be broadly true, it misrepresents and misuses the statistic and is likely to produce biased tables.

    Comparing standardised mortality ratios seems intuitive and looks reasonable until one unpicks their construction. A standardised mortality ratio takes the exposed (local) group, not the wider or national group, as its standard, which is probably where the misperception arises. Comparisons of standardised mortality ratios with one another are therefore invalid unless the age and sex distributions of the populations concerned are similar. The bias introduced by such comparisons may be small when departures from this condition are modest, but we do not know how far any one population departs from it, nor how much that departure contributes to its position in the table.3
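    A minimal numerical sketch of this point, using invented figures rather than anything from the published tables: two populations with identical age-specific death rates can receive different standardised mortality ratios against the same reference population, because each ratio is weighted by its own population's age structure.

```python
# Hypothetical illustration: identical age-specific death rates, different SMRs.
# All figures are invented for the sketch and do not come from the league tables.

reference_rates = {"<65": 0.002, "65-74": 0.020, "75+": 0.080}  # deaths per person-year
local_rates     = {"<65": 0.003, "65-74": 0.022, "75+": 0.082}  # identical in both units

# The two units serve populations with different age structures.
population_a = {"<65": 8000, "65-74": 1500, "75+": 500}   # younger case mix
population_b = {"<65": 2000, "65-74": 3000, "75+": 5000}  # older case mix

def smr(population, local, reference):
    """Indirectly standardised mortality ratio: 100 x observed / expected deaths."""
    observed = sum(population[g] * local[g] for g in population)
    expected = sum(population[g] * reference[g] for g in population)
    return 100 * observed / expected

print(f"SMR, unit A: {smr(population_a, local_rates, reference_rates):.0f}")  # about 114
print(f"SMR, unit B: {smr(population_b, local_rates, reference_rates):.0f}")  # about 104
```

    Unit A's ratio comes out higher than unit B's even though the underlying age-specific mortality is the same, which is exactly the kind of artefact that comparing populations with dissimilar age structures can introduce.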

    This point has been raised before, both in relation to Dr Foster's league tables and more generally.4 5 It may be that Dr Foster's tables have some statistical validity, but I find it difficult to tell. There is a good argument that those seeking to further public health with good information should stop using standardised mortality ratios. We often try to represent highly complex issues with simple figures; in such cases we should either avoid summary figures that require the statistical rules to be bent or acknowledge that simplification of this degree does not reflect reality.


    Use of language should be more careful in describing league tables

    1. M F Shiu (Man-Fai.Shiu{at}wh-tr.wmids.nhs.uk), consultant cardiologist
    1. Walsgrave Hospital, Walsgrave CV2 2DX

      EDITOR—As a cardiologist working in the hospital with the highest overall heart bypass mortality, I note the injudicious use of terms such as "health ghettoes" and "excessive deaths" in most commentaries on league tables. This use of language creates undue alarm among the public.1

      Hospitals with higher surgical mortality tend to be larger hospitals, with a higher throughput of cases and surgeons who accept patients at higher risk. League tables can give a true picture only if all units adopt the same selection criteria and operate on similar patients. Each surgical centre, and each individual surgeon, tends to adopt its own threshold for accepting patients at high risk, and this affects the centre's overall mortality. Dr Foster claims that age has been taken into account in its model for adjusting standardised mortality. In reality, the Society of Cardiac Surgeons accepts that even highly sophisticated models cannot accurately predict operative mortality, particularly for patients at higher risk. Dr Foster uses a model with simple variables, and one of its many deficiencies is that the data defining the degree of urgency of operations are not collected. Without these variables in the risk adjustment, Dr Foster cannot claim that any deaths are excessive.
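      As a purely hypothetical sketch of this case mix effect (the figures are invented and are not taken from any unit's results), two units with identical death rates within each risk group still show different crude mortality once one of them accepts a larger share of high risk patients:

```python
# Hypothetical illustration: case mix alone moves crude mortality.
# Both units have identical death rates within each risk group.

death_rates = {"low_risk": 0.01, "high_risk": 0.10}  # same in both units

caseload_a = {"low_risk": 900, "high_risk": 100}  # unit A declines most high risk referrals
caseload_b = {"low_risk": 600, "high_risk": 400}  # unit B accepts them

def crude_mortality(caseload, rates):
    """Deaths per 100 operations, ignoring risk group."""
    deaths = sum(caseload[g] * rates[g] for g in caseload)
    return 100 * deaths / sum(caseload.values())

print(f"Unit A: {crude_mortality(caseload_a, death_rates):.1f}% crude mortality")  # 1.9%
print(f"Unit B: {crude_mortality(caseload_b, death_rates):.1f}% crude mortality")  # 4.6%
```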

      Statistics on non-emergency operations show that our surgeons are second to none in their skill. The higher overall mortality can be explained by the fact that we and the referring hospitals have asked our surgeons to operate on older patients at higher risk who have been turned down by other centres. We stand by our practice since we know that without an operation, these patients would have had a much lower chance of survival.

      The immediate impact of publishing such league tables will be that many such patients are turned down for surgery. Hospitals with lower mortality cannot be complacent about their results, still less proud of them, unless they can show that their surgeons are as willing to take on high risk cases. Most surgeons will now adopt more defensive practices, turning away higher risk patients, and we will never find out how many patients die or suffer as a result: their statistic will never appear in any league table. Elderly sick patients are particularly vulnerable.

      If we have to live with league tables, Dr Foster should also publish detailed information on case mix and volume: a complete picture of patient profiles alongside surgical deaths, to allow the public to make informed choices without undue alarm.


      Publication of league tables needs to be open and accurate

      1. B Bridgewater, consultant cardiothoracic surgeon,
      2. T Hooper, consultant cardiac surgeon,
      3. C Campbell, consultant cardiac surgeon,
      4. M Jones (mark.jones{at}gw.smuht.nwest.nhs.uk), consultant cardiothoracic surgeon,
      5. J Carey, consultant cardiac surgeon,
      6. P Waterworth, consultant cardiac surgeon,
      7. A Deiraniya, consultant cardiothoracic surgeon,
      8. N Yonan, consultant cardiac surgeon
      1. Department of Cardiothoracic Surgery, Wythenshawe Hospital, Manchester M23 9LT

        EDITOR—Vass's news item urges caution in interpreting Dr Foster's league tables, which show South Manchester University Hospitals Trust second from the bottom.1 We in this trust support initiatives to inform the public about health outcomes and contribute to the register of the Society of Cardiothoracic Surgeons. Unit-specific results are available on our website,2 and they indicate that our performance is satisfactory and are at odds with Dr Foster's publication.

        Dr Foster's analysis used hospital episode statistics, which are designed for contracting and activity purposes and are well known for their inaccuracy. These data were analysed with a risk algorithm that has not been subjected to independent validation.

        Dr Foster states that it provides independent, authoritative health information. It is overseen by an ethics committee, whose role is to ensure responsible and accurate use of data. It prides itself on listening to interested parties and emphasises communication before publication.

        We first heard about Dr Foster's initiative by a circuitous route; no direct contact was made with our trust. Despite close relations between the Department of Health and Dr Foster, the department has not questioned our performance and has disseminated analyses showing satisfactory outcomes. Comparing the Dr Foster and Department of Health data shows good correlation for most units, but our trust performed significantly worse in the Dr Foster analysis, for reasons that we do not understand. This must cast doubt on its methods. Interestingly, the league table published in the BMJ showed hospitals ranked according to Dr Foster's analysis rather than that of the department, or indeed of the Society of Cardiothoracic Surgeons, which was not mentioned (http://www.scts.org/).

        There is a political agenda for openness, but funding of the NHS falls short of European averages, and the proportion of it spent on information technology is not sufficient to generate accurate information.

        Being listed inappropriately low down in a league table creates anxiety for patients and relatives and is damaging for staff morale, recruitment, and retention. This is important given the current underprovision of cardiac services and our desire to fulfil the revascularisation targets of the national service framework. Additionally, there are implications for cardiological referral practice: patients at high risk will be denied operations as surgeons strive to keep their noses clean.


        Dr Foster's ranking of hospitals in good birth guide is misleading

        1. Peter M Dunn, emeritus professor of perinatal medicine and child health
        1. University of Bristol, Southmead Hospital, Bristol BS10 5NB

          EDITOR—Vass reports that doctors' organisations have urged that great caution be taken in interpreting the Dr Foster hospital guides prepared by Sir Brian Jarman's team and published in the Times.1 2 They attempt to compare the performance of hospitals in various disciplines. The warning is justified since the information provided is insufficient to permit meaningful conclusions. Rather than increase public awareness and understanding, as Sir Brian hopes, the guides are likely to confuse, mislead, and cause anxiety.

          On 15 July the Times published Dr Foster's good birth guide.3 Region by region, maternity hospitals were listed in order of merit. Having undertaken a regional survey of maternity services, I am well aware of the complexity of comparing different hospitals' performances. On close inspection of the Dr Foster league tables I discovered that the published order of merit had been determined just by ranking hospitals according to the number of births per midwife per year, fewer births being classed as better.

          Although adequate midwife staffing is not unimportant, it is absurd to grade hospitals' quality of care on this single factor. Such a presentation is misleading to the point of irresponsibility. It could even be argued that fewer births per midwife per year might indicate a hospital with an unsatisfactory reputation.


          NHS is national but not uniform

          1. Tom Aslan (t.aslan{at}lse.ac.uk), partner in general practice
          1. 1 Binfield Rd, London SW4 6TB

            EDITOR—The article by Adab et al on performance league tables for the NHS presents a good argument for the use of control charts in place of league tables.1 Charts seem more understandable and are less likely to cause confusion. The statistical problems of league tables are well put and valid.
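            A minimal sketch of the control chart idea (the deaths and operation counts are invented, and the three sigma limits are one common choice rather than necessarily the method of Adab et al): each unit's mortality is compared with limits around the overall rate that widen as its number of operations falls, so units are flagged only when they fall outside ordinary sampling variation rather than being ranked.

```python
# Hypothetical illustration of a control chart for mortality: flag only units
# whose rate falls outside limits set by sampling variation around the overall rate.
from math import sqrt

units = {"A": (12, 500), "B": (30, 900), "C": (9, 150), "D": (25, 700)}  # (deaths, operations)

p_bar = sum(d for d, n in units.values()) / sum(n for d, n in units.values())  # overall rate

for name, (deaths, ops) in units.items():
    p = deaths / ops
    limit = 3 * sqrt(p_bar * (1 - p_bar) / ops)  # three sigma limits widen for small units
    status = "outside limits" if abs(p - p_bar) > limit else "within limits"
    print(f"Unit {name}: {100 * p:.1f}% vs {100 * p_bar:.1f}% overall -> {status}")
```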

            I disagree with Adab et al, however, in that the NHS cannot be regarded as a single uniform organisation. The Dr Foster data identified staffing levels as greatly affecting mortality. Trusts differ in their staff retention rates and policies, and they do not always attract the same quality of applicants. From this point of view, comparing one trust with another may be more like comparing Ford with Honda than comparing different units within the same company.

            This does not detract from the use of control charts, but it is important not to view the NHS as adhering to a uniform pattern, as trusts differ in their priorities, incentives, and abilities. As an outcome measure, mortality is still too rare an event to be very sensitive and, no matter how it is presented, will therefore not be very informative. New, more sensitive outcome measures need to be developed.

            It is also a mistake to look at outcomes without looking at use of resources. When comparing two coronary bypass units, for example, it is not sufficient to know each unit's 30 day mortality without also knowing its cost per patient. This has been the gaping hole in most of the recently published data, including those from the Dr Foster team.
