
Press

Reporting NHS performance: how did the media perform?

BMJ 2000; 321 doi: https://doi.org/10.1136/bmj.321.7255.248 (Published 22 July 2000) Cite this as: BMJ 2000;321:248
  John Appleby, director, Health Systems Programme, King's Fund, London
  Andy Bell, acting head of public affairs, King's Fund, London

    As the NHS has learnt from winters past, it only takes a few high-profile cases of patients on trolleys for the media to dust off stories and headlines of “NHS in crisis.” Facts and figures are rarely allowed to spoil these seasonal horror stories. But when the figures are presented, how do the media react?



    Reports in most newspapers of the latest set of NHS performance indicators for England, released last week, made a reasonable attempt to convey some overall assessment of how the NHS is doing, but—like the professor of history asked by a harassed radio reporter, “The crusades, good or bad?”—they were caught between the desire to oblige with a definitive, succinct answer and the need to show at least some awareness of the real complexity of the issues.

    Of the eight major national daily newspapers, all bar one (the Mirror) devoted considerable space to the high level performance indicators (generally population health measures by health authority) and clinical indicators (a handful of measures covering hospitals). Much of the space was taken up by reproducing tables of indicators for individual hospitals and health authorities.

    Although the Department of Health avoided any attempt to provide an overall aggregate measure of NHS performance, the clear message from the newspaper headlines was that the NHS is failing to meet one of its central goals, equity, in terms both of access to services and of population health. “National lottery of hospital survival,” “league tables reveal patchwork NHS,” and “the postcode difference between life and death” typified the reports. No newspaper mentioned last year's indicators, against which most of this year's set show an improvement. The emphasis instead was on variations, with the focus on those labelled “best” and those labelled “worst.”

    Just two newspapers carried leading articles about the figures. The Sun praised the government for publishing the performance indicators but argued that they should be more comprehensible; its stablemate, the Times, noted that one problem with the “phone book-sized bundle of figures” is that they are likely to be of more use to health professionals than to patients or the public. But if, as the Times leader urged, the NHS should expand publication to include the performance of clinical teams and individual hospital departments, there is clearly much presentational work to be done.

    Unfortunately, most newspapers performed poorly in presentational terms. A crucial statistical aspect of many of the indicators is the degree of uncertainty surrounding the figures. Although a couple of newspapers mentioned the words “confidence interval,” only the Daily Telegraph reproduced the actual intervals, and it then failed to explain how overlaps between intervals should be interpreted. Given the league table approach to the indicators taken by most newspapers, the fact that most health authorities cannot be statistically distinguished from one another on most health measures undermined much of the editorial comment.
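    A minimal worked example shows why the overlaps matter. The figures below are invented for illustration and are not drawn from the published indicators: two hypothetical hospitals with apparently different death rates whose 95% confidence intervals overlap, so that a league table ranking one below the other would not be statistically justified.

import math

# Invented figures for two hypothetical hospitals (illustration only):
# deaths within 30 days of surgery, out of n operations.
hospitals = {"Hospital A": (45, 1000), "Hospital B": (60, 1100)}

for name, (deaths, n) in hospitals.items():
    rate = deaths / n
    # Approximate 95% confidence interval for a proportion
    # (normal approximation: rate +/- 1.96 standard errors).
    se = math.sqrt(rate * (1 - rate) / n)
    print(f"{name}: {rate:.1%} (95% CI {rate - 1.96*se:.1%} to {rate + 1.96*se:.1%})")

# Prints roughly: Hospital A: 4.5% (95% CI 3.2% to 5.8%)
#                 Hospital B: 5.5% (95% CI 4.1% to 6.8%)

    Because the intervals overlap substantially, the apparent gap between the two hospitals could easily be the product of chance, and ranking them in a league table would mislead. Strictly, a formal test of the difference is needed, but overlap on this scale means the two cannot confidently be separated.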

    However, virtually all newspapers picked up the fact that many indicators reflected not just the performance of the NHS but other factors too. In particular, the link between poverty and health was noted as confounding interpretation of the bald figures. The Sun, for example, focused most heavily on the link between health and wealth, whereas the Daily Mail mentioned it merely to deny that such a link was important, reflecting perhaps the different assumed readerships of the two newspapers. Such confounding factors, along with inaccuracies in the underlying data and apparently unique local circumstances, were sometimes noted, somewhat defensively, by health service managers.

    Apart from managers and clinicians drawn from the “best” and “worst” authorities and trusts, the main voices quoted were John Denham, minister of health, and Peter Hawker, chairman of the BMA consultants' committee. The voice of patients, as is so often the case, was largely absent.

    Looking at the coverage overall, it is clear that taking a national approach to the performance indicators is extremely difficult. For newspaper reporters with little time to interpret information that has been created largely for professional readers, the job of presenting it to the general public is a tough one. With data like these, the most important level of public communication is probably local. At the local or even regional level it is easier to show changes over time and to focus on the performance of individual NHS organisations—looking at where they have done well and where they have scored badly. The national picture, if such a thing exists in this case, is much more complicated.

    For the national media, that creates a major dilemma. Do they oversimplify the figures to make them accessible, but risk distorting the data, or do they simply re-present what is given to them, and risk confusing people? Most newspapers tried both. Thus several newspapers carried simple case studies of “good” and “bad” health authorities and trusts alongside endless tables and graphs in which even finding one's local hospital would be a tough job for many readers. Perhaps the Mirror's decision not to cover the story at all was the wisest.

    Footnotes

    • The eight newspapers studied were the Daily Express, Daily Mail, Daily Telegraph, Guardian, Independent, Mirror, Sun, and Times from Friday 14 July 2000.