
Head To Head

Should we use large scale healthcare interventions without clear evidence that benefits outweigh costs and harms? No

BMJ 2008; 336 doi: https://doi.org/10.1136/bmj.a144 (Published 05 June 2008) Cite this as: BMJ 2008;336:1277
  1. C Seth Landefeld, professor (affiliations 1, 2)
  2. Kaveh G Shojania, assistant professor (affiliation 3)
  3. Andrew D Auerbach, associate professor of medicine (affiliation 1)
  1. University of California San Francisco, 3333 California Street, San Francisco, CA 94118, USA
  2. San Francisco VA Medical Center, San Francisco
  3. Ottawa Health Research Institute, Ottawa, Canada
  Correspondence to: C S Landefeld sethl@medicine.ucsf.edu

Obtaining definitive evidence on the effects of large scale interventions can be difficult. Bernard Crump (doi: 10.1136/bmj.a145) believes that implementation with careful monitoring is justified, but Seth Landefeld and colleagues argue that acting without proof is both costly and potentially damaging to health

Large scale healthcare interventions are likely to improve the health of the public if the evidence clearly shows that the benefits outweigh harms and costs. Often, however, the evidence is not compelling, and well intended interventions may fail to improve health, or may even cause harm, while costing dearly. Moreover, when a large scale intervention is implemented without compelling evidence, wishful thinking may replace careful evaluation, and an unproved innovation may become an enduring but possibly harmful standard of care. Such interventions should be implemented, therefore, only when the evidence shows that expected benefits outweigh harms and costs and only when the effects of implementation will be evaluated systematically.1

Large scale healthcare interventions aim to influence clinical evaluation, treatment, or care of a large group of people. Some interventions are coercive, such as the restrictions on working hours for resident physicians enforced by the US Accreditation Council for Graduate Medical Education. Others, such as pay for performance, use explicit incentives, while still others (such as the US Institute for Healthcare Improvement’s 5 Million Lives Campaign to reduce preventable adverse events) are voluntary.

Evaluating evidence

Two questions underlie this debate. Firstly, how can we assess evidence to build knowledge about improvement? The assessment of evidence about a healthcare intervention is, in principle, straightforward, and the methods are well developed.2 3 4 In practice, however, challenging issues arise. Can knowledge of the effect of a large scale intervention be built from observational studies as well as experimental trials? Of course: many study designs can provide robust evidence.5 However, the debacle over hormone replacement therapy reminds us that reliance on even well done observational studies can mislead both policy makers and clinicians.6 Is the evaluation of costs and harms sufficient? Unexpected harms of interventions are understudied, possibly common, and often difficult to discern.7 8 How should information about the efficacy of an intervention be put into practice, and how should practice be revised as new evidence emerges? Carefully, because the effects of an intervention may vary among patients, providers, and medical care environments, which often differ from those in studies that established efficacy.9 10 For example, policies to promote potentially effective screening for breast or colorectal cancer have proved harmful when applied to patients unlikely to benefit.11 12 New evidence often emerges after an intervention has been put into practice, sometimes contradicting earlier judgments about its efficacy, as shown by the growing literature about rapid response teams.13 Questions about evidence and knowledge are fundamental to decisions about whether to implement a large scale intervention, and they merit thoughtful consideration by independent judges.

Assessing balance

Secondly, how should we weigh the benefits, costs, and harms of a large scale intervention? Decisions about whether to implement an intervention should be informed by optimistic and pessimistic estimates of its net benefit—that is, best case and worst case scenarios.

If the best case is that an intervention will not have net benefit, the intervention is a non-starter. For example, a policy to reduce falls in hospital by restraining all older patients is a non-starter because it would cause harms that outweigh benefits, even in the best case.14

If the worst case is that an intervention will have net benefit, and its costs are acceptable, then it is a “no brainer” to implement. For example, washing hands before and after examining each patient will reduce nosocomial infection at low cost, even in the worst case.15 (Of course, a large scale intervention to put a clearly beneficial behaviour into universal practice is not simple and merits due consideration of the available evidence.)

If the best case is positive but the worst case is not, then more data are needed. For example, the effects of rules restricting residents’ working hours were unknown when they were implemented.16 In the best case, the restrictions might benefit patients by reducing medical errors related to fatigue. In the worst case, they might harm patients by decreasing access to care, disrupting continuity of care, and increasing errors related to miscommunication, as early evidence suggested.17 Moreover, restrictions on working hours are costly. It remains unclear whether such rules can be implemented in a way that yields net benefit.18 If the net effects of an intervention are uncertain, large scale implementation is dubious, especially without a rigorous plan to determine those effects.
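The three cases amount to a simple decision rule. The sketch below is purely illustrative: it assumes that net benefit can be summarised as a single number (positive when benefits outweigh harms) estimated under optimistic and pessimistic scenarios, and the function name and example values are assumptions chosen only to make the logic explicit.

```python
# Illustrative sketch of the best case / worst case screen described above.
# Net benefit is expressed here as a single assumed number: positive values
# mean that expected benefits outweigh harms.

def screen_intervention(best_case_net_benefit: float,
                        worst_case_net_benefit: float,
                        costs_acceptable: bool) -> str:
    """Classify a proposed large scale intervention under the three cases."""
    if best_case_net_benefit <= 0:
        # Even the most optimistic estimate shows no net benefit.
        return "non-starter: do not implement"
    if worst_case_net_benefit > 0 and costs_acceptable:
        # Even the most pessimistic estimate shows net benefit at acceptable cost.
        return "no brainer: implement, with due attention to how"
    # Best case is positive but worst case is not (or costs are unacceptable):
    # large scale rollout is premature without a rigorous evaluation plan.
    return "uncertain: gather more evidence before large scale implementation"


# Hypothetical example: restrictions on residents' working hours, where the
# optimistic estimate is favourable but the pessimistic estimate is not and
# the costs are substantial.
print(screen_intervention(best_case_net_benefit=1.0,
                          worst_case_net_benefit=-0.5,
                          costs_acceptable=False))
# -> uncertain: gather more evidence before large scale implementation
```

In practice net benefit cannot be reduced to a single number, but the point stands: large scale implementation is warranted only when even the pessimistic estimate, at acceptable cost, remains favourable.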

Our patients will predictably benefit from large scale interventions only when the benefits outweigh harms and costs. We can promote such interventions in three ways. Firstly, we must commit ourselves to implementing large scale interventions only when it is clear that the benefits outweigh costs and harms. Secondly, we must build the knowledge and methods needed to achieve this goal. Finally, we must raise the priority of this work so that it is adequately supported. The costs and harms of delay—whether by doing nothing or by implementing large scale interventions of unknown benefit—are too high.
