Head To Head

Should we use large scale healthcare interventions without clear evidence that benefits outweigh costs and harms? Yes

BMJ 2008; 336 doi: https://doi.org/10.1136/bmj.a145 (Published 05 June 2008) Cite this as: BMJ 2008;336:1276
Bernard Crump, chief executive officer
NHS Institute for Innovation and Improvement, Coventry CV4 7AL
bernard.crump{at}institute.nhs.uk

Obtaining definitive evidence on the effects of large scale interventions can be difficult. Bernard Crump believes that implementation with careful monitoring is justified but Seth Landefeld and colleagues (doi: 10.1136/bmj.a144) argue that acting without proof of net benefit is both costly and potentially damaging to health

Large scale health intervention covers a wide range of circumstances. Some concern the use of a new drug or therapeutic procedure; for these we have developed, over the past 50 years, a widely accepted understanding of the nature of the evidence that would lead to a consensus about an intervention’s use, while not underestimating the challenge of acquiring it. Other interventions are much more complex: they are about the behaviour of people and systems, and it does the public no service to apply only the yardsticks we have developed for narrower biomedical interventions. Although we should be equally rigorous in our evaluation, we need to learn from other scientific sectors to broaden our understanding of evidence.

Imperfect evidence

In the NHS the National Institute for Health and Clinical Excellence (NICE) gives guidance on questions such as “Should the NHS make available gemcitabine for pancreatic cancer?” or “Is there adequate information about the safety and efficacy of laparoscopic repair of abdominal aortic aneurysm?”1 2 using a rigorous approach to integrate the best available evidence on efficacy and, where available, cost effectiveness. In these, as in countless other examples, NICE has offered guidance on how to proceed in circumstances where benefits are not yet clear. We also know that evidence may change.3

In seeking to clarify the role of primary, secondary, and tertiary care for familial breast cancer,4 a more complex question, NICE made 105 recommendations. For none of these was it able to draw on evidence from randomised controlled trials, or even from a single controlled or quasi-experimental study. Sixteen of the recommendations rested on non-experimental descriptive studies and the remainder on expert opinion. Should NICE have proposed that no service be offered until better evidence existed (leaving practitioners to face the public without guidance), or should it have made the recommendations that it did?

Interventions designed to improve the quality or safety of care are more complex still. They involve individuals, organisations, and systems. Examples might include the steps taken by five primary care trusts to shift care closer to patients’ homes5 or the interventions instituted by an NHS trust to reduce its hospital standardised mortality ratio.6 Such interventions present formidable challenges to the experimental evaluator. For example, it is rarely possible to contemplate blinding of investigator or participant, and randomisation, where it can occur, often needs to be at the level of the cluster rather than the individual.

Mechanisms of change

Moreover, experience from other sectors shows that successful improvement programmes build in real time feedback on intermediate outcomes and allow the intervention to be adjusted as implementation proceeds.7 At the heart of many quality and safety interventions is the need to stimulate and motivate change, and continuous feedback is central to this.

More fundamentally, the nature of the experimental approach to evaluating such programmes is problematic. Experimental evaluation is based on an approach to the establishment of causation that can be described as successionist.8 The changes in outcome that occur in the experimental and control groups are all that matters, and they are observed externally. The context in which these changes occur is relevant only in so far as it can confirm the adequacy of the randomisation process.

This failure to take account of context leads to at least two problems. Where an evaluation concludes that an intervention works, it remains unknown why it has worked. Evaluations are also prone to concluding that an intervention does not work when, from another perspective, its impact is specific to place and context.9

An alternative approach to studying causation is the generative approach, which requires an understanding of the mechanisms causing change, takes account of internal as well as external factors, and demands a deep appreciation of the context. Social scientists have developed models of evaluation based on the generative approach, often referred to as scientific realism.10 Such evaluations are rigorous and exacting, using a combination of quantitative and qualitative approaches with an emphasis on capturing evidence about the context in which interventions take place.

For example, researchers from University College London and the RAND Corporation studied nine healthcare institutions in the US and Europe with a strong reputation for quality improvement.11 Using interviews, narrative accounts, observation, and document analysis, the team created a novel framework describing the organisational context that distinguished these organisations and their most successful teams. The use of such approaches in the study of programmes to improve safety and quality deserves exploration.

Meanwhile, those embarking on an improvement initiative should be clear about how they will know whether a change is an improvement, should monitor the effects of their interventions (including costs) using sound methods for measuring quality, and should capture information about the context in which their intervention is taking place. This is analogous to the approach NICE recommends for new interventions when more evidence is needed.
