
Education And Debate

Detecting differences in quality of care: the sensitivity of measures of process and outcome in treating acute myocardial infarction

BMJ 1995; 311 doi: https://doi.org/10.1136/bmj.311.7008.793 (Published 23 September 1995) Cite this as: BMJ 1995;311:793
Jonathan Mant, clinical lecturer in public health medicine,a
Nicholas Hicks, consultant public health physicianb

a Department of Public Health and Primary Care, University of Oxford, Radcliffe Infirmary, Oxford OX2 6HE
b Oxfordshire Health, Headington, Oxford

Correspondence to: Dr Mant.

Accepted 26 June 1995

The merits or otherwise of publishing hospital specific death rates are much debated. This article compares the relative sensitivity of measures of process and outcome to differences in quality of care for the hospital treatment of myocardial infarction. Aspects of hospital care that have a proved impact on mortality from myocardial infarction are identified, and the results of meta-analyses and large randomised controlled trials are used to estimate the impact that optimal use of these interventions would have on mortality in a typical district general hospital. Sample size calculations are then performed to determine how many years of data would be needed to detect significant differences between hospitals, and a comparison is made with the amount of data that would be needed if information about the process of care were being collected instead. Process measures based on the results of randomised controlled trials can detect relevant differences between hospitals that would not be identified by comparing hospital specific mortality, which is an insensitive indicator of the quality of care.

There is widespread dissatisfaction with the mechanisms currently used to monitor performance in the NHS. Contracts between purchaser and provider are dominated by finance and activity. Specifications that relate to the quality of care are often either unmeasurable or refer to limited aspects of care--such as waiting times--which, while relevant, do not fully reflect the quality of clinical care. Purchaser performance is monitored by a similarly barren tool, the efficiency index, which encourages increased activity per pound spent with no regard to the benefits or adverse effects of the measured activity on health.1

One response to these criticisms has been to encourage the use of routine measures of outcome, such as death rates, to compare hospital performance. A recent example of this has been the publication by the Scottish Office of death rates for NHS patients in Scotland.2 The methodological difficulties in using such outcome measures to monitor performance are well recognised.3 4 These include problems of definition--such as consistency of case finding and precision of case definition--and the effects of case mix, severity, comorbidity, and chance. These problems are acknowledged by the Scottish Office, which emphasises that differences in outcome are likely to reflect differences in patients rather than in the care they received.2 Nevertheless, it has been suggested that these difficulties are likely to be viewed as challenges rather than insurmountable barriers.4 In other words with sufficient care and effort, definitions could be standardised, and rates could be constructed to take account of differences in patients.

Such an approach has been followed in the United States, where severity adjusted mortality is viewed as a potentially useful indicator of quality of care for conditions such as myocardial infarction,5 and considerable efforts have gone into refining severity adjustment systems, particularly in intensive care.6 7 The implicit intention is that, with sufficient sophistication, any differences observed in severity adjusted mortality between two hospitals would be attributable either to chance, which can be handled by putting confidence limits around the death rates, or to genuine differences in the quality of care. Such severity adjusted rates would have attractions for clinicians, who could use them to audit their care; for purchasers and general practitioners, who could use them to set meaningful and measurable quality standards and to inform their choice of provider; and for patients, who could base their choice of provider of health care on them.

A second response has been to suggest a change of emphasis from measuring how much is done to what is done--a switch from purchasing activity to purchasing protocols.8 In this approach, high quality care is taken to be care that is consistent with the results of clinical trials. Where evidence from randomised controlled trials shows that an intervention is effective, it is relevant to monitor the process of care, as this will reveal the extent to which clinical practice has taken account of the research findings. This strategy reflects concern that sometimes research findings have not been incorporated into clinical practice.9

Perhaps insufficient consideration has been given to the capacity of outcome measurement to detect real differences in performance. We compared the relative sensitivity of measures of process and measures of outcome in detecting true differences in quality of care in different hospitals. A specific example is taken--namely, the management of acute myocardial infarction. We examined (a) what aspects of hospital care have a proved impact on mortality from myocardial infarction, (b) what overall effect optimal use of these proved interventions might have on mortality from myocardial infarction, (c) what difference this would make in terms of numbers of deaths each year in a typical district general hospital, and (d) if a difference in uptake of effective interventions existed between two hospitals, how much data would be needed to detect a significant difference in hospital specific mortality and hospital specific measures of process of care.

Methods and results

ASPECTS OF HOSPITAL CARE WITH A PROVED IMPACT ON MORTALITY FROM MYOCARDIAL INFARCTION

This issue was recently addressed by a systematic review of the literature10 and a review of recent advances in cardiology.11 These reviews identified aspirin, thrombolysis, β blockers, and angiotensin converting enzyme inhibitors as being of proved benefit. The suggestion from a previous meta-analysis that intravenous magnesium, intravenous vasodilators, and anticoagulants might be of additional benefit12 has not been supported by more recent evidence from large multicentre trials.13 14 15

OVERALL EFFECT OF OPTIMAL USE OF PROVED INTERVENTIONS ON MORTALITY FROM MYOCARDIAL INFARCTION

Table I gives a summary of the effects of proved interventions in terms of relative risk reduction. Estimates were calculated of the proportion of admitted patients for whom the interventions might be indicated and the relative risk reductions were adjusted to take account of these. From these adjusted relative risk reductions an estimate of the combined relative risk reduction of all treatments was made, assuming that their effects were additive. This assumption was correct for aspirin and thrombolysis,20 but the additional effects of the other treatments may have been overestimated. An absolute risk reduction for all patients admitted with myocardial infarction was calculated assuming a mortality of 30% with no treatment (the worst 30 day mortality in the figures from the Scottish Office2). Thus the absolute risk reduction associated with optimal use of treatment identified by randomised controlled trials was up to 16.35%--163 additional lives saved per 1000 patients treated, or one life saved for every 6.1 patients admitted with acute myocardial infarction.
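The combination step can be made concrete with a short sketch. The individual adjusted relative risk reductions below are illustrative placeholders chosen to sum to the article's combined estimate; they are not the actual values in table I. Only the additive assumption and the 30% baseline mortality are taken from the text.

```python
# A minimal sketch of the combination logic described above. The four
# adjusted relative risk reductions are hypothetical placeholders (chosen
# to sum to the article's combined estimate), NOT the values in table I.

adjusted_rrr = {
    "aspirin": 0.20,          # placeholder
    "thrombolysis": 0.20,     # placeholder
    "beta blockade": 0.10,    # placeholder
    "ACE inhibition": 0.045,  # placeholder
}

combined_rrr = sum(adjusted_rrr.values())  # additive assumption (see text)
baseline_mortality = 0.30                  # worst 30 day mortality in the Scottish figures

arr = combined_rrr * baseline_mortality    # absolute risk reduction
print(f"combined relative risk reduction: {combined_rrr:.1%}")  # 54.5%
print(f"absolute risk reduction at 30% baseline: {arr:.2%}")    # 16.35%
print(f"patients admitted per life saved: {1 / arr:.1f}")       # 6.1
```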

TABLE I

Effects of proved interventions on mortality from myocardial infarction. Values are percentages


This absolute risk reduction of 16.35% is likely to be an overestimate: generous assumptions were made about the proportion of patients eligible for treatment and the extent to which the effects of interventions were additive; the possibility that treatment with aspirin might have been started by the general practitioner was not considered; and a very high baseline mortality (30%) was assumed. In fact, the mortality in the control groups of the trials on which the evidence for the treatments was based was much lower: 11% in the trials examining aspirin and thrombolysis,17 18 and lower still in the trials of β blockers16 and angiotensin converting enzyme inhibitors.14 Absolute risk reduction is dependent on baseline risk: if the estimate of the combined relative risk reduction is applied to a population with a risk of death without treatment of only 11%, then the corresponding absolute risk reduction is 5.1% rather than 16.35%.
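The dependence on baseline risk is simple multiplication, and a two line check shows its size. The combined figure below is again the hypothetical value from the sketch above; note that simple proportional scaling gives about 6% at an 11% baseline, slightly higher than the article's own figure of 5.1%.

```python
combined_rrr = 0.545  # hypothetical combined value from the sketch above
for baseline in (0.30, 0.11):
    # absolute risk reduction scales in direct proportion to baseline risk
    print(f"ARR at {baseline:.0%} baseline: {combined_rrr * baseline:.2%}")
# prints 16.35% and 6.00%; the article quotes 5.1% at an 11% baseline
```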

EFFECT OF OPTIMAL TREATMENT ON NUMBER OF DEATHS EACH YEAR IN TYPICAL DISTRICT GENERAL HOSPITAL

In a typical district general hospital serving a population of 300 000 people, 450 admissions a year might be expected for acute myocardial infarction (the crude admission rate in Oxfordshire for myocardial infarction in 1992-3, based on routine information, was 1.5 per 1000 residents (J Volmink, personal communication)). If the baseline mortality was 30%, optimal use of the interventions listed in table I might be expected to reduce this death rate to about 14% (30% minus the absolute risk reduction of 16.35%), saving 74 lives a year. Figure 1 shows the potential impact on mortality of different levels of uptake of proved interventions.
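The arithmetic for this hypothetical hospital, together with the linear relation between uptake and mortality that figure 1 depicts, can be sketched as follows. The linear interpolation between 30% mortality (no uptake) and about 14% (full uptake) is an assumption implied by the text rather than a stated formula.

```python
population = 300_000
admissions = population * 1.5 / 1000  # 450 admissions a year at the Oxfordshire rate

baseline_mortality = 0.30  # no use of the proved interventions
full_arr = 0.1635          # absolute risk reduction at 100% uptake

def mortality(uptake):
    """Expected mortality at a given uptake (0 to 1) of the proved
    interventions, assuming benefit scales linearly with uptake."""
    return baseline_mortality - uptake * full_arr

print(f"admissions a year: {admissions:.0f}")                             # 450
print(f"mortality at full uptake: {mortality(1.0):.1%}")                  # roughly 14%
print(f"lives saved a year at full uptake: {admissions * full_arr:.0f}")  # 74
print(f"uptake giving 25% mortality: {(0.30 - 0.25) / full_arr:.0%}")     # about 31%
```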

DATA NEEDED TO DETECT SIGNIFICANT DIFFERENCE IN MORTALITY AND IN MEASURES OF PROCESS OF CARE BETWEEN HOSPITALS

Let us consider two hypothetical district general hospitals, A and B. The hospitals both serve catchment populations of 300 000 people. These populations are identical demographically in terms of age, social class, and ethnic mix. The general practitioners in the two areas refer exactly the same types of patient to hospital. The availability of ambulances in the two areas is the same. The two hospitals use identical case definitions of myocardial infarction and carry out the same diagnostic tests in exactly the same way. The admission rate for myocardial infarction is identical in the two hospitals (1.5 per 1000 population--that is, 450 admissions a year). How likely is it that differences in the quality of care are reflected in significant differences in the death rates or in differences in measures of process of care? If hospital A makes no use of effective interventions and its mortality is 30%, and hospital B's mortality varies between 21% and 29%, then sample size calculations can be performed to determine the amount of data that would be needed to detect differences in mortality (table II). The uptake of the proved interventions that would be needed for hospital B to achieve its lower mortality can be calculated if a mortality of 30% is assumed to reflect no use of an intervention and a mortality of 14% to reflect 100% use (figure 1). Sample size calculations can then be performed to determine how much data would be needed to detect these differences in the process of care. The sample sizes shown in table II were calculated with the software Epi Info 6.01, with 80% power and a (two sided) significance level of 5%.
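We cannot reproduce Epi Info 6.01's internals, but the standard sample size formula for comparing two proportions, with the Fleiss continuity correction, comes very close to the figures in table II; here is a sketch:

```python
from math import ceil, sqrt

def patients_per_hospital(p1, p2):
    """Patients needed in each hospital to detect a difference between
    proportions p1 and p2 with 80% power at a two sided significance
    level of 5%. Uses the standard normal approximation with the Fleiss
    continuity correction; Epi Info 6.01 may differ in small details."""
    z_alpha, z_beta = 1.959964, 0.841621  # 5% two sided; 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    # Fleiss continuity correction
    return ceil(n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p2)))) ** 2)
```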

TABLE II

Sample size needed to detect difference between hospital A and hospital B in treatment of myocardial infarction

Figure 1

Potential impact of optimal uptake of proved interventions on mortality from myocardial infarction in a district general hospital with mortality of 30% with no treatment

Figure 2

Measures of process may be more sensitive than outcome measures in detecting differences in quality of care between hospitals

If hospital B's mortality is lower than hospital A's, how much lower does it have to be for a significant difference to be detectable? Table II shows that if each hospital had 1350 patients in three years--the period over which the Scottish data were published--and hospital A's mortality was 30%, then hospital B's mortality would need to be 25% or lower for the difference to be significant. If 30% mortality reflects no use of the effective interventions listed in table I, and 25% mortality reflects 31% use of such interventions, then such a difference in the process of care would be detected as significant after the care of just 27 patients in each hospital had been monitored. If data were collected for only one year then the mortality in hospital B would have to be 21% or lower to be significantly different from that in hospital A. Detecting the differences in process of care that would have led to such a difference in mortality would require less than two weeks (12 patients in each hospital) of data collection.
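Running the sketch above on the two comparisons in this paragraph reproduces the published figures:

```python
# mortality comparison: 30% v 25% needs about 1300 patients per hospital,
# within the 1350 accumulated by each hospital over three years
print(patients_per_hospital(0.30, 0.25))  # 1291

# process comparison: no use v 31% uptake of the proved interventions
print(patients_per_hospital(0.00, 0.31))  # 27
```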

One consideration in sample size calculations is the size of difference in mortality that should be regarded as clinically important. Table II presents these differences in two ways: as a relative difference and in terms of the number of lives saved each year. Thus a change in mortality from 30% to 25% can be expressed as a relative reduction of 17% (about the same as the effect of thrombolysis (see table I)) or as the equivalent of 22 lives saved each year. On the basis of table II, it is probably impractical to detect smaller differences in mortality, but it is feasible to detect differences in the process of care that would lead to smaller differences in mortality. At the most extreme, for a 3% relative reduction in mortality to be detected, data would need to be collected for 73 years. In contrast, the difference in process of care that would lead to this difference in mortality would be apparent after four months' collection of data. Is such a difference clinically important? On the basis that one life would be saved per 100 patients treated, it probably is, in that this is a more favourable absolute effect of treatment than was achieved in the ISIS 1 trial, which estimated that 200 patients would have to be treated with a β blocker in the acute stages of myocardial infarction for one life to be saved.16
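The extreme case can be checked the same way. A 3% relative reduction from 30% mortality is about one percentage point (30% v 29%), and with the linear uptake model that corresponds to an uptake difference of about 6% of patients (one point of the 16.35 point range). Both derived inputs are our reading of the text rather than values quoted in it:

```python
# mortality: 30% v 29%, a relative reduction of about 3%
n = patients_per_hospital(0.30, 0.29)
print(n, "patients, or", round(n / 450), "years of data")          # about 73 years

# process: the uptake difference behind that mortality difference,
# 0.01 / 0.1635, is about 6% of patients
n = patients_per_hospital(0.00, 0.061)
print(n, "patients, or", round(n / (450 / 12)), "months of data")  # about 4 months
```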

Discussion

The analysis suggests that, even with data aggregated over three years, a perfect system of severity adjustment, and identical case ascertainment and definition, disease specific mortality is an insensitive tool with which to compare the quality of care among hospitals. In contrast, relatively short audits of process of care could identify relevant differences among hospitals.

Our analysis is crude in that we have made several assumptions to produce an estimate of the combined impact of the known effective interventions on mortality in hospital from myocardial infarction. These assumptions, however, will have tended to exaggerate the impact of these interventions on mortality and thus have overestimated the apparent sensitivity of hospital specific mortality as a measure of quality. A more serious problem with the analysis is that it restricts itself to considering aspects of care that have been shown to have an effect on mortality. There are other features of hospital care that have an impact, such as the skill of the cardiac arrest team and the ability of the medical and nursing staff to handle the complications of myocardial infarction. If such features have an important effect on mortality then the relative utility of monitoring mortality will be greater.

Focusing on process has other advantages. If differences are shown then the area for action is clear. Conversely, if a genuine difference in mortality is found between two units then, to improve the care in the unit with the worse results, it would still be necessary to identify which differences in the processes of care led to the difference in outcome. Measures of process are easier to interpret: the more people (who do not have a genuine contraindication) who are given streptokinase after a myocardial infarction the better. On the other hand, however good the system of severity adjustment, plausible explanations that have nothing to do with the quality of hospital care can always be offered for mortality differences between units. Monitoring process does, however, have limitations. It is inappropriate if no evidence exists that a process leads to better outcome, and it becomes unwieldy if there are many aspects of process that have been shown to affect outcome.

Although the analysis has been simplistic, it has made the same sort of assumptions that would need to go into a power calculation to support a proposal for a randomised controlled trial. In the same way that decisions to fund trials are based on such calculations, sensitivity to change should be taken into account when choosing between indicators to monitor the quality of care. For this reason alone, monitoring severity adjusted, condition specific mortality to assess performance may not represent good value for money. On the other hand, if one of the aims of monitoring hospitals is to promote clinical effectiveness,21 then measuring aspects of process of care that have been shown by randomised controlled trials to influence outcome is an attractive alternative.

Acknowledgments

We thank Pat Yudkin, Martin Vessey, and Susanna Graham-Jones for their helpful comments on drafts of this paper.

References