
Research Methods & Reporting

Taking healthcare interventions from trial to practice

BMJ 2010; 341 doi: https://doi.org/10.1136/bmj.c3852 (Published 13 August 2010) Cite this as: BMJ 2010;341:c3852
  1. Paul Glasziou, professor1,
  2. Iain Chalmers, coordinator (James Lind Initiative)2,
  3. Douglas G Altman, professor of statistics in medicine3,
  4. Hilda Bastian, editor in chief4,
  5. Isabelle Boutron, statistician5,
  6. Anne Brice, information specialist1,
  7. Gro Jamtvedt, executive director6,
  8. Andrew Farmer, professor of general practice1,
  9. Davina Ghersi, team leader7,
  10. Trish Groves, deputy editor8,
  11. Carl Heneghan, director, Centre for Evidence Based Medicine1,
  12. Sophie Hill, researcher9,
  13. Simon Lewin, researcher6,
  14. Susan Michie, professor of health psychology10,
  15. Rafael Perera, researcher1,
  16. Valerie Pomeroy, professor of neurorehabilitation11,
  17. Julie Tilson, assistant professor12,
  18. Sasha Shepperd, researcher1,
  19. John W Williams, professor of medicine and psychiatry13
  1. 1 Department of Public Health and Primary Care, University of Oxford, Oxford OX3 7LF
  2. 2James Lind Initiative, Oxford OX2 7LG
  3. 3Centre for Statistics in Medicine, Oxford OX2 6UD
  4. 4German Institute for Quality and Efficiency in Health Care, Cologne, Germany
  5. 5INSERM, Université Paris, Paris, France
  6. 6Norwegian Knowledge Centre for the Health Services, Oslo, Norway
  7. 7Department of Research Policy and Cooperation, World Health Organization, Geneva
  8. 8BMJ, London
  9. 9Centre for Health Communication and Participation, Australian Institute for Primary Care, La Trobe University, Victoria, Australia
  10. 10Division of Psychology and Language Sciences, University College London, London WC1E 7HB
  11. 11Health and Social Sciences Research Institute, Faculty of Health, University of East Anglia, Norwich
  12. 12Division of Biokinesiology and Physical Therapy, University of Southern California, Los Angeles, CA, USA
  13. 13Duke Evidence-based Practice Center, Duke University and Durham VA Medical Center, Durham, NC, USA
  1. Correspondence to: P Glasziou paul.glasziou@dphpc.ox.ac.uk
  • Accepted 31 May 2010

The results of thousands of trials are never acted on because their published reports do not describe the interventions in enough detail. How can we improve the reporting?

Much healthcare research is currently wasted because its findings are unusable.1 Published reports of intervention trials often focus on the results and fail to describe interventions adequately. For example, a review of 80 studies selected for the journal Evidence Based Medicine as both valid and important for clinical practice found that clinicians could replicate the intervention in only half the studies.2 Interventions may be used incorrectly or not at all if there is inadequate detail in the trial protocol, on the conduct of the trial, in systematic reviews and guidelines, and finally during implementation (fig 1). This is an unnecessary but remediable waste, as we discuss below.

Fig 1 Distortion or loss of information about the true intervention can occur at each of four stages and the intervention may not reach practice without good reporting and trial fidelity (shaded boxes)

Study protocol

The methods section in a protocol should provide a description of the intervention(s) (whether active, usual practice, or placebo) that is sufficiently detailed to enable people with appropriate expertise to reproduce them. This should include:

  • What were the “contents,” including all constituent components, materials, and resources and their quality

  • Who delivered the intervention, including their expertise, additional training, and support

  • Where the intervention was delivered (the setting)

  • How and when the intervention was delivered: the dose, the schedule (intensity, frequency, duration), and interaction

  • The degree of flexibility permissible, including options and decision points.3

This list is readily adaptable for interventions beyond clinical treatments and encounters—for example, to health systems and other complex interventions. Attention should be paid to the different meanings that terms such as counselling or physical therapy may have in different settings.
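These elements can also be captured as a structured record that travels with the trial registration entry or protocol. The following minimal sketch, written in Python with illustrative field names and an invented example (it is not a published standard), shows one way of holding the who, what, where, how, and flexibility elements in a machine-readable form that can be serialised and linked from the registration record:

# Minimal sketch of a structured intervention description covering the
# elements listed above (what, who, where, how/when, flexibility).
# Field names and the example values are illustrative assumptions only.
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class InterventionDescription:
    name: str
    contents: List[str]          # constituent components, materials, resources
    provider: str                # who delivered the intervention
    provider_training: str       # additional training and support
    setting: str                 # where it was delivered
    dose: str                    # amount per session or contact
    schedule: str                # intensity, frequency, duration
    flexibility: List[str] = field(default_factory=list)  # options, decision points

# Invented example for a hypothetical exercise intervention
example = InterventionDescription(
    name="Supervised quadriceps strengthening",
    contents=["resistance bands", "printed exercise diary"],
    provider="physiotherapist",
    provider_training="half day of training in the trial manual",
    setting="outpatient physiotherapy clinic",
    dose="30 minutes per session",
    schedule="twice weekly for 8 weeks",
    flexibility=["resistance progressed once 12 repetitions achieved"],
)

print(json.dumps(asdict(example), indent=2))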

Space constraints in trial registration databases and scientific journals may restrict full reporting of interventions. Potential solutions include complying with WHO requirements for reporting interventions and adding web hyperlinks to trial registration records and other documentation,4 as recommended by signatories to the Ottawa statement (ottawagroup.ohri.ca) and on other websites where protocols are reported.

The development and description of treatment schedules may take considerable planning, particularly for non-drug interventions. Figure 2 shows the development of conventional physical therapy interventions as a precursor to studies evaluating novel interventions for recovery of movement after stroke.5 Semistructured interviews and focus groups were used to capture the content of conventional physical therapy interventions.5

Fig 2 Illustration of methods to develop a physical therapy treatment schedule5

Study fidelity: planned versus actual treatment

Trial reports should describe the extent to which the intervention, as delivered, was consistent with the protocol. Fidelity can have several dimensions: whether components are delivered as prescribed (adherence); the amount of exposure to the content; the extent to which the delivery was aligned with the underpinning theory (quality); and the degree to which participants engaged in6 or modified the intervention. Poor fidelity will lead to unclear or misleading conclusions.

Despite its importance, fidelity of the intervention is often not reported: only 25 of 80 (31%) prevention studies reported evidence of fidelity,6 and only 69 of 192 (36%) drug studies documented assessment of adherence to treatment—the simplest measure of fidelity.7

Assessment of fidelity may require qualitative and quantitative methods. For example, a trial comparing the effect of two diagnostic tests for malaria delivered inconclusive results, but a parallel qualitative study showed that in areas of high malaria prevalence clinicians treated malaria regardless of the random allocation.8 Likewise, in a large trial of an intervention to increase physical activity in sedentary adults, coding of session audiotapes showed that only 42% of intervention techniques were delivered as specified in the protocol.9
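Coding delivered sessions against the protocol, as in the physical activity trial above, reduces to a simple adherence calculation once each session has been coded for the techniques the protocol specifies. The sketch below uses invented technique names and session codes purely to illustrate the arithmetic:

# Minimal sketch of a fidelity (adherence) calculation from coded sessions.
# Assumes each session has been coded for whether each protocol-specified
# technique was delivered; all data below are invented for illustration.
protocol_techniques = {"goal setting", "self monitoring", "feedback", "relapse prevention"}

coded_sessions = [
    {"goal setting", "self monitoring"},
    {"goal setting", "self monitoring", "feedback"},
    {"feedback"},
]

per_session = [len(s & protocol_techniques) / len(protocol_techniques) for s in coded_sessions]
overall = sum(per_session) / len(per_session)

for i, f in enumerate(per_session, 1):
    print(f"Session {i}: {f:.0%} of specified techniques delivered")
print(f"Overall fidelity: {overall:.0%}")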

Interventions can involve several actors. Though clinicians may deliver the intervention, participants may not adhere to it. Hence the role of both clinician and participant needs to be described. An example of good practice is the DiGEM trial of self monitoring of blood glucose concentrations,10 for which the nurse training manual describes the intervention and timing of delivery in detail, with the intended effect and the required level of knowledge, skills, and behaviour for the research nurse and the person with diabetes.

Measures to improve and assess fidelity at the trial protocol stage include:

  • Designing the intervention using a recognised theoretical framework

  • Producing a manual or written instructions for the interventions

  • Training all study members responsible for protocol delivery

  • Observing delivery

  • Using checklists to ensure competency and standardisation of delivery

  • Providing support material for trial participants that promotes adherence.

Any drift away from fidelity during the trial should be reported when the study is published.

Publication of single studies

The Uniform Requirements for Manuscripts Submitted to Biomedical Journals have for many years advised authors to “Describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to verify the reported results.” (www.icmje.org)

It is unclear why this suggestion has not been extended to all aspects of the research methods. The need to provide detailed information about interventions has been recognised in several guidelines for reporting research, the best known of which is the CONSORT (Consolidated Standards of Reporting Trials) statement.11 The 2010 update of the CONSORT statement requires authors to describe “the interventions for each group with sufficient details to allow replication, including how and when they were actually administered.”

Despite the CONSORT guidelines and advice on good reporting of interventions, reporting is currently poor (table). For example, only 13% of papers on back pain reported reproducible interventions.12 A review of 158 reports of randomised controlled trials in surgery showed that important components of the intervention (such as the anaesthesia protocol or perioperative management) were reported in fewer than half of the reports.13 Furthermore, only 41% reported the intervention actually administered as opposed to the intervention intended in the protocol.13

Summary of studies that assessed whether interventions in published trial reports could be replicated


Since 2001, extensions to CONSORT have focused on specific types of intervention, with detailed recommendations on reporting interventions. For example, the extension to trials of non-drug treatments15 recommended reporting precise details of both the experimental treatment and the comparator, including a description of the different components of the interventions; when applicable, the procedure for tailoring the interventions to individual participants; details of how the interventions were standardised; and details of how care providers’ adherence to the protocol was assessed or enhanced. The extensions for other types of study have not directly tackled the ability to replicate interventions, although the WIDER (Workgroup for Intervention Development and Evaluation Research) group of journal editors has issued recommendations to ensure that behavioural interventions can be replicated.16

Adequate reporting is difficult and needs greater attention from authors, peer reviewers, and journals. Trials of complex interventions particularly may benefit from innovative communication methods such as graphic techniques,17 video, and audio. For example, videos are available to guide use of the WHO safe surgery checklist.18

Synthesis of evidence and systematic reviews

Interventions will usually vary across trials in a systematic review, reflecting differing inclusion criteria and specific aspects of the intervention. Even for relatively simple interventions, such as antibiotics for acute sinusitis, the specific antibiotic, dose, duration, and timing may vary. For more complex interventions, such as strategies to implement clinical practice guidelines, heterogeneity is greater.19

For the review user a central question is: “Which intervention should we use when there are multiple versions in a review?” For example, a review reported that exercise for patients with osteoarthritis of the knee can reduce pain and improve function.20 However, almost all studies in the review used different types and doses of exercise. If a review shows a collective intervention to be effective, the user is challenged to determine which configuration, elements, or dose of the intervention should be implemented for their patients or setting. Methods to guide this are poorly developed. (Note: identical problems occur in guideline development.)

During synthesis of evidence, the intervention description may be modified at several stages:

  • The review protocol—An intervention may be inadequately conceptualised at the protocol stage

  • Conducting the review—Authors may not consider the features of an intervention that could affect implementation and instead focus on classifying interventions for exploring heterogeneity

  • Dissemination—When the review enters the media, the description of the intervention may be altered

This complicates decisions about which configuration of the intervention to implement.

With rare exceptions,21 reviewers do not attempt to improve descriptions of interventions. Conceptual frameworks may facilitate the classification and description of interventions. For example, a review of “audit and feedback” classified interventions according to intensity and provided examples to illustrate the different intensities.22 This categorisation was intuitive rather than theory based; using theory may lead to conceptually more coherent categories and therefore more meaningful results.

Mapping the components of an intervention

Specifying the components within interventions in a review can help identify similarities and differences, allowing the effective “ingredients” to be defined. For example, Rubenstein used cross-case qualitative analysis to assess whether specific design features of collaborative care interventions were associated with greater effect on depression compared with usual care.23 This qualitative analysis looked closely at features that occurred in studies with greater effects, generating hypotheses about the most important components of the intervention. Core components may also be identified by surveying trialists. For example, Langhorne et al used all trials of “stroke units” to identify key components and then surveyed the trialists’ collaboration to find out which components they had used and to derive a composite intervention.21
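A trial-by-component matrix is one simple way to make such mapping explicit. The sketch below (Python with pandas; the trial names, components, and effect sizes are invented) tabulates which components each trial used and compares average effects descriptively, in the spirit of the cross-case analysis described above:

# Sketch of a trial-by-component matrix for mapping intervention "ingredients"
# across the studies in a review. All trials, components, and effects are invented.
import pandas as pd

components = pd.DataFrame(
    {
        "case_manager":           [1, 1, 0, 1],
        "scheduled_follow_up":    [1, 0, 0, 1],
        "specialist_supervision": [1, 1, 1, 0],
    },
    index=["Trial A", "Trial B", "Trial C", "Trial D"],
)
effects = pd.Series([0.45, 0.30, 0.10, 0.40], index=components.index, name="effect_size")

# Descriptive comparison: mean effect among trials with and without each component
for col in components.columns:
    with_component = effects[components[col] == 1].mean()
    without_component = effects[components[col] == 0].mean()
    print(f"{col}: mean effect with={with_component:.2f}, without={without_component:.2f}")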

Taxonomies

One method of identifying the active ingredient(s) of an intervention is to specify them, and the control or comparison conditions, systematically using standardised taxonomies, and then to use meta-regression to show effects hidden by more conventional methods of synthesising evidence.24 25 Taxonomies help ensure a planned approach to analysis, particularly when heterogeneity prevents meta-analysis. They also facilitate the accumulation of knowledge across heterogeneous studies, making it easier to update reviews and identify gaps. Mechanisms underlying an intervention can be investigated by linking active ingredients to hypothesised causal mechanisms (theory) through approaches such as “realist synthesis” or consensus among content experts.
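As an illustration of the meta-regression step, the sketch below regresses invented trial effect estimates on a coded taxonomy indicator, weighting each trial by the inverse of its variance. This is a fixed effect sketch using statsmodels; a full analysis would usually also allow for between-study heterogeneity (a random effects meta-regression):

# Minimal sketch of a meta-regression: trial effect sizes regressed on a coded
# intervention component (taxonomy indicator), weighted by inverse variance.
# All numbers are invented for illustration.
import numpy as np
import statsmodels.api as sm

effect = np.array([0.45, 0.30, 0.10, 0.40, 0.25])   # trial effect estimates
se = np.array([0.10, 0.12, 0.15, 0.11, 0.14])       # their standard errors
has_feedback = np.array([1, 1, 0, 1, 0])             # coded component from the taxonomy

X = sm.add_constant(has_feedback)                    # intercept + component indicator
model = sm.WLS(effect, X, weights=1 / se**2).fit()   # inverse variance weights
print(model.params)  # intercept, and the difference in effect associated with the component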

Using the study

Unless there is clarity about what interventions involve, patients and health professionals cannot ensure they receive beneficial interventions or avoid unhelpful or harmful interventions.

Patients, practitioners, and policy makers learn about interventions directly from trials and systematic reviews or, more commonly, from intermediaries and secondary sources (websites, advice centres, media, clinical practice guidelines, librarians) or practitioners. The details of evaluated interventions should be readily available in the public domain. The minimal elements of knowledge that patients (or the providers of information to patients) need about the intervention are who, what, when, and how, as we set out above. Additionally, clinicians may need information about skills, equipment, or referral sources to provide effective treatment. The box gives our proposals to increase the usefulness of research reports.

Actions to improve usefulness of research reports

  • When planning trials, researchers should work with end users to develop and deliver the interventions. Clear specifications of the components of the intervention should be planned and reported

  • Researchers and funders should improve the description of interventions (including “usual practice”) in protocols and pay attention to the fidelity of an intervention

  • A stable “intervention bank” should be established (eg, videos, manuals, and fidelity tools linked to the trial registration number) to overcome the problem of word count restrictions in journals and similar constraints

  • Systematic reviews should include a summary table describing study interventions, with links to trial publications and other resources relevant to replicating the interventions

  • The reporting standards for interventions in trials (CONSORT, etc) and systematic reviews (PRISMA26) should be improved and standardised, for example with specific checklists

Notes

Cite this as: BMJ 2010;341:c3852

Footnotes

  • We thank Mike Clarke and the reviewers for helpful comments and Mary Hodgkinson for organising meetings. The costs of the meeting were in part covered by PG’s NIHR fellowship.

  • Contributors and sources: PG organised and IC chaired a two day consensus meeting of the authors to discuss the problems of reporting trial interventions, develop a guide on describing interventions throughout the research process, and prioritise recommendations to reduce information distortion and loss. All authors contributed to discussions and writing of the paper.

  • Competing interests: All authors have completed the unified competing interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare no support from any organisation for the submitted work and no financial relationships with any organisation that might have an interest in the submitted work in the previous three years; TG is an editor at the BMJ but was not involved in the peer review process. DG is an employee of the World Health Organization and works on trial registration. DA is an executive member of the EQUATOR network.

  • Provenance and peer review: Not commissioned; externally peer reviewed.

References