Table 1

The Bradford Hill Criteria: for each criterion, its epidemiological meaning, a translation for quality improvement and implementation (QI&I) and a brief description of QI&I methods that can provide evidence, with advice for practitioners

Criterion | Contribution of methods used in QI&I and advice to practitioners
1. Strength of association
  • Epidemiological meaning: what is the size of the effect, that is, the relative risk or odds ratio (OR)?

  • QI&I translation: what is the size of the effect, that is, the efficacy of the intervention on the outcomes of interest?

Statistical process control (SPC) charts enable identification of special cause variation that is unlikely to be due to chance alone, thereby providing statistical evidence of an effect and its magnitude (relative risk, number needed to treat and estimates of the attributable effect are useful measures of effect size).
  • State the magnitude of change and its clinical or system-level meaning

  • Use SPC charts, with a clear rule set and control limits determined in advance, to maintain objectivity and avoid fishing for a result (a minimal sketch follows this list)
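
As an illustration of the advice above, the sketch below computes individuals (XmR) control limits from baseline data, flags later points that breach the 3-sigma limits, then computes the effect-size measures mentioned (relative risk, number needed to treat). All data, variable names and the single-rule set are hypothetical; real SPC practice uses a fuller, pre-agreed rule set.

```python
# Minimal XmR (individuals) SPC sketch with one pre-agreed rule:
# a point beyond the 3-sigma control limits signals special cause
# variation. Data and names are hypothetical.
from statistics import mean

def xmr_limits(baseline):
    """Centre line and 3-sigma control limits from baseline points."""
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    sigma = mean(moving_ranges) / 1.128  # d2 constant for moving ranges of 2
    centre = mean(baseline)
    return centre, centre - 3 * sigma, centre + 3 * sigma

baseline = [42, 45, 39, 44, 41, 43, 40, 46, 42, 44]  # pre-intervention measure
post = [47, 49, 52, 55, 54]                          # post-intervention measure

centre, lcl, ucl = xmr_limits(baseline)
print(f"centre {centre:.1f}, limits [{lcl:.1f}, {ucl:.1f}]")
for i, x in enumerate(post, start=len(baseline) + 1):
    label = "special cause" if not (lcl <= x <= ucl) else "common cause"
    print(f"point {i}: {x} -> {label}")

# Effect-size measures from a hypothetical 2x2 comparison:
risk_with = 12 / 200      # event risk with the intervention
risk_without = 30 / 200   # event risk without it
print(f"relative risk {risk_with / risk_without:.2f}, "
      f"NNT {1 / (risk_without - risk_with):.0f}")
```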

2. Consistency of association
  • Epidemiological meaning: are repeated observations from different places, at different times, with differing methods, by different researchers, under different circumstances in agreement?

  • QI&I translation: does repeated application of the intervention produce similar results in different contexts?

Existing evidence contributes to the programme theory and implementation plan, which can be used to demonstrate consistent impact, for example, through scaling up.
  • Keep track of intervention–outcome data as scale-up occurs to build knowledge about the consistency of the intervention–outcome relationship across settings (see the sketch after this list)

  • Analyse contextual barriers and enablers; make and note amendments to the implementation plan
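
A minimal sketch of the first point, assuming per-site before/after measures are collected as scale-up proceeds; the site names and values are hypothetical. A consistent direction (and broadly similar size) of change across settings supports this criterion.

```python
# Track intervention-outcome data per site during scale-up; a
# consistent direction of change across sites supports criterion 2.
# Sites and values are hypothetical.
sites = {
    "site_A": (42.0, 51.0),  # (baseline mean, post-intervention mean)
    "site_B": (38.5, 47.2),
    "site_C": (45.1, 49.8),
}

changes = {name: after - before for name, (before, after) in sites.items()}
for name, change in changes.items():
    print(f"{name}: change {change:+.1f}")

consistent = all(c > 0 for c in changes.values()) or all(c < 0 for c in changes.values())
print("direction of effect consistent across sites" if consistent
      else "effect direction varies across sites: revisit context and fidelity")
```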

3. Specificity of association
  • Epidemiological meaning: is the outcome unique to the exposure?

  • QI&I translation: could anything else have produced the observed result?

A combination of implementation design (eg, stepped-wedge), SPC charts and other analyses of change can inform the specificity of outcomes in relation to the intervention and planned implementation activities.
  • Establish a comparison or control group, where possible, to identify secular trends (ie, explore the counterfactual: what might have occurred without the intervention? See the sketch after this list)

  • Ensure that the design and evaluation plan mitigate potential bias and confounding

  • Explore what alternative mechanisms might have produced the effect
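
The comparison-group advice can be made concrete with a difference-in-differences sketch: subtracting the comparison group's change removes a secular trend shared by both groups. The groups and values here are hypothetical.

```python
# Difference-in-differences sketch for exploring the counterfactual;
# the comparison group captures the secular trend that would have
# occurred without the intervention. Values are hypothetical.
intervention_before, intervention_after = 42.0, 53.0
comparison_before, comparison_after = 41.0, 45.0  # drifts without the intervention

raw_change = intervention_after - intervention_before   # 11.0
secular_trend = comparison_after - comparison_before    # 4.0
print(f"change net of secular trend: {raw_change - secular_trend:+.1f}")  # +7.0
```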

4. Temporality
  • Epidemiological meaning: does the exposure occur before the outcome?

  • QI&I translation: does intervention activity occur before the outcome?

SPC charts establish the relationship between the timing of the intervention and observed special cause variation, and Plan-Do-Study-Act (PDSA) cycles document QI&I activity.
  • Annotate SPC charts with intervention events; include annotations of relevant external events, apart from the planned intervention, that could have influenced the outcome; make clear how and when special cause variation is detected and handled (see the sketch after this list)

  • Ensure sufficient baseline data points to understand the variation inherent in the system

  • Specify the predicted time period necessary to implement the intervention before improvement is expected to occur
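
One way to make temporality auditable is to keep intervention and external events in a dated log alongside the date the first special cause signal appeared, as in the sketch below; all dates and labels are hypothetical.

```python
# Dated annotations for an SPC chart: planned intervention events and
# relevant external events, checked against when special cause
# variation was first detected. Dates and labels are hypothetical.
from datetime import date

annotations = [
    (date(2024, 3, 4), "PDSA 1: checklist piloted on one ward"),
    (date(2024, 4, 1), "PDSA 2: checklist revised and spread"),
    (date(2024, 4, 15), "external: national awareness campaign"),
]
first_special_cause = date(2024, 4, 22)

for when, label in annotations:
    relation = "precedes" if when < first_special_cause else "follows"
    print(f"{when}: {label} ({relation} first special cause signal)")
```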

5. Biological gradient
  • Epidemiological meaning: as more of the stimulus is added, does the response increase?

  • QI&I translation: is more effect observed with more intervention, or with higher fidelity of intervention?

A combination of programme theory, implementation design and plan (eg, stepped-wedge), SPC charts and other analyses of change can examine the extent to which outcomes improve in relation to the intervention ‘dose’ in planned programme activities.
  • Demonstrate the relationship between the dose of the intervention and the outcome using SPC charts or other analyses to display effect size (see the sketch after this list)

  • In designing the implementation plan, consider the activities needed to deliver the ‘dose’
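
A dose-response check can be as simple as correlating the delivered ‘dose’ (eg, a per-unit fidelity score) with the observed improvement. The sketch below uses hypothetical values; statistics.correlation requires Python 3.10+.

```python
# Dose-response sketch: correlate intervention 'dose' with outcome
# change across units; values are hypothetical.
from statistics import correlation  # available from Python 3.10

dose = [0.2, 0.4, 0.5, 0.7, 0.9]         # fidelity of delivery per ward
improvement = [1.0, 2.5, 2.9, 4.8, 6.1]  # outcome change per ward

r = correlation(dose, improvement)
print(f"dose-response correlation: {r:.2f}")  # values near +1 suggest a gradient
```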

6. Plausibility
  • Epidemiological meaning: does the postulated causal relationship make sense?

  • QI&I translation: can the intervention plausibly explain the outcome?

Programme theory and process maps should set out why the intervention can plausibly affect the outcome of interest. The implementation plan should consider the amount of intervention required to obtain a response, and statistical evaluations should reflect the degree of confidence in cause and effect.
  • Draw on existing theories and models (eg, behavioural science, implementation research) to assess the plausibility of the postulated QI&I initiative

  • Observe how the intervention works in practice and link to PDSA cycles to test theories; update the programme logic in light of learning

7. Coherence
  • Epidemiological meaning: is the association compatible with existing theory and knowledge?

  • QI&I translation: as above

Existing literature that demonstrates evidence for the causal case (using knowledge from across disciplines) builds coherence.
  • Conduct a review of current knowledge (including grey literature and experience), assessing the evidence regarding the effectiveness of implementation strategies

8. Analogy
  • Epidemiological meaning: does the causal relationship conform to a previously described relationship?

  • QI&I translation: are there similar interventions in different settings?

Learning from other improvers and researchers builds analogy; for instance, a similar intervention (eg, a ‘care bundle’) in one setting has analogy to another.
  • Find analogies in existing literature to increase confidence that similar approaches will work elsewhere even if the specific intervention or implementation strategy differs

9. Experiment
  • Epidemiological meaning: does controlled manipulation of the exposure variable change the outcome?

  • QI&I translation: does modification of the intervention produce a difference in outcome?

Programme theory highlights areas for implementation activity. The implementation design should mitigate confounding and bias where possible. PDSA cycles can be used to experiment, recognising that multiple changes may be required.
  • Test changes using iterative PDSA cycles along the theorised causal pathway to build confidence in cause and effect

  • Document predictions, what changes were made and why; reflect on the accuracy of predictions and determine what new information was gained; update the programme theory (see the sketch after this list)
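
A lightweight way to keep predictions honest is to store each PDSA cycle's prediction next to its observed result, so the programme theory can be updated from the gap between the two. The record structure, fields and values below are hypothetical.

```python
# Minimal PDSA record keeping the prediction alongside the observed
# result so its accuracy can be reviewed (criterion 9); fields and
# values are hypothetical.
from dataclasses import dataclass

@dataclass
class PDSACycle:
    change: str
    prediction: str
    observed: str
    prediction_held: bool
    next_step: str

cycle = PDSACycle(
    change="pharmacist review added to the discharge process",
    prediction="medication errors fall by about 30% within 4 weeks",
    observed="errors fell about 25%; discharge delays rose on Fridays",
    prediction_held=True,
    next_step="adjust Friday staffing, then re-test on a second ward",
)
print(cycle)
```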