Making it happen: engaging the power of many in translating research into practice
  1. Lillian S Kao1,
  2. Clifford Y Ko2,3
  1. 1 Department of Surgery, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas, USA
  2. 2 Division of Research and Optimal Patient Care, American College of Surgeons, Chicago, Illinois, USA
  3. 3 Department of General Surgery, University of California Los Angeles, Los Angeles, California, USA
  1. Correspondence to Dr Lillian S Kao, Surgery, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, TX 77030, USA; Lillian.s.kao{at}uth.tmc.edu

Randomised controlled trials (RCTs) are considered the gold standard for the rigorous evaluation of healthcare interventions because, when feasible, they generate the least biased estimate of treatment effect. However, completion of a trial is not the end game; that is, the continuum of translating science into practice does not end with publication of the RCT. Rather, active efforts must be made to translate the research findings into general practice. Indeed, the science of implementation, defined holistically as the study of ways to promote, enhance and ensure the sustained integration of research evidence into frontline practice, has shown us the complexity of achieving this goal. In the absence of active dissemination and implementation efforts, new knowledge and practices are often taken up through diffusion of innovation, and adoption depends primarily on attributes of the innovation. Thus, if we just focus on ‘letting it happen’ instead of ‘helping it happen’ or ‘making it happen’, then adoption will be slow—it takes an estimated 17 years for 14% of research to be integrated into routine clinical care.1 Not only does the uptake of evidence take a long time, but even when it does occur, it often does not happen completely. While there are leaks all along the research-to-practice pipeline, in this issue of the journal, Schmidtke and colleagues have focused on the post-trial implementation of research findings and on the factors contributing to the uptake, or lack thereof, of those findings.2

Schmidtke and colleagues conducted a sequential, explanatory mixed methods study of six large, publicly funded RCTs in England.2 They noted that despite these trials being of high quality and adequately powered, only half had their results implemented into practice over follow-up periods ranging from 6 to 14 years. Not surprisingly, whether the evidence was adopted depended on more than the trial methodology. The complexity surrounding implementation of evidence into practice has long been known; a systematic review almost 20 years ago highlighted the importance of multiple implementation components, including the innovation itself, the organisational context and readiness for innovation, and external factors such as the role of professional societies and system-level incentives.3 While this systematic review predated the Consolidated Framework for Implementation Research (CFIR),4 a broadly used modern tool for conceptualising implementation drivers, it highlighted similar components.

In the current study, using CFIR, Schmidtke and colleagues identified facilitators and barriers to the implementation of actionable findings from large, publicly funded elective surgical trials in the English National Health Service. The authors found that while RCT evidence is an important influence on practice, it is not the only influence and ‘decision-makers seem to respond to the totality of evidence such that there are often plausible reasons for not adopting the evidence of any one trial in isolation’. Such reasons included emerging evidence on safety, emerging evidence on alternative therapies, the speed at which resources could be freed up for implementation and the lack of centralised guidance.

For example, the intervention domain within CFIR includes an assessment of the strength of the evidence. In the current article, changes in evidence and/or the weight of pre-existing evidence contributed to the implementation, or lack thereof, of the RCTs’ results. Because evidence accumulates over time, the prior probability of an intervention’s effectiveness may change even during the conduct of an RCT. More nimble strategies, such as cumulative and Bayesian meta-analyses,5 6 can be used to evaluate such accumulating evidence and to incorporate it into research designs. Specifically, Bayesian analyses use probability distributions, which allow formal incorporation of prior evidence into analyses, frequent updating of probabilities as evidence accumulates, and calculation of the probability of benefit at different thresholds (ie, the probability of a benefit equal to or greater than the minimum clinically significant difference).7 Furthermore, trials can be designed from the start as hybrid effectiveness-implementation trials to hasten adoption of evidence into practice; these hybrid designs place varying emphasis on either effectiveness or implementation depending on the existing body of direct and indirect evidence, the risks and benefits, the complexity, and stakeholder buy-in for the clinical and implementation intervention or strategy.8
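
To make the mechanics concrete, the following is a minimal sketch of this kind of sequential Bayesian updating under a simple normal-normal conjugate model; the prior, the accumulating trial results and the threshold for a clinically meaningful benefit are hypothetical illustrations, not values drawn from the trials discussed here.

```python
# Illustrative sketch only: Bayesian updating of a treatment effect as trial
# evidence accumulates, reporting the posterior probability that the benefit
# meets or exceeds a minimum clinically meaningful difference. All numbers
# below are hypothetical.
from math import sqrt
from scipy.stats import norm

def update_normal(prior_mean, prior_sd, data_mean, data_se):
    """Combine a normal prior with a normally distributed effect estimate."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = 1.0 / data_se**2
    post_prec = prior_prec + data_prec
    post_mean = (prior_mean * prior_prec + data_mean * data_prec) / post_prec
    return post_mean, sqrt(1.0 / post_prec)

# Hypothetical sceptical prior: effect centred on no benefit, wide uncertainty.
mean, sd = 0.0, 5.0
mcid = 2.0  # hypothetical minimum clinically meaningful difference

# Hypothetical accumulating results: (observed effect, standard error).
trials = [(3.0, 2.5), (2.5, 1.8), (1.8, 1.2)]

for i, (effect, se) in enumerate(trials, start=1):
    mean, sd = update_normal(mean, sd, effect, se)
    p_any = 1.0 - norm.cdf(0.0, loc=mean, scale=sd)
    p_mcid = 1.0 - norm.cdf(mcid, loc=mean, scale=sd)
    print(f"After trial {i}: P(benefit > 0) = {p_any:.2f}, "
          f"P(benefit >= MCID) = {p_mcid:.2f}")
```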

With regard to the other CFIR domains, the question can be asked: what is the most efficient strategy, on a broad scale, for enabling organisations and the individuals within them to evaluate and adopt evidence as it arises? Based on the authors’ research and on experience, the answer may be to leverage the power of large organisations (outer setting) to create standards and to ensure adherence to those standards. Schmidtke and colleagues noted specifically the importance of official committees and professional societies, such as the National Institute for Health and Care Excellence in the UK, in assimilating trial evidence and in using evidence-based guidelines to support uptake of research into clinical practice.2 In the USA, the American College of Surgeons (ACS) plays a similar role in incentivising individuals and organisations to ‘make things happen’, not just through guideline creation but also through formal clinical accreditation programmes on a broad scale.

Are accreditation programmes effective in the efficient implementation of evidence to achieve better care and outcomes? Limited evidence exists that accreditation improves uptake of evidence from RCTs. Furthermore, studies evaluating the impact of accreditation are limited by their observational nature, differences between accreditation programmes and heterogeneity in hospital characteristics. Nonetheless, the literature provides a starting point for assessing the strengths and weaknesses of such programmes. A 2020 systematic review suggests that although accreditation is associated with improved safety culture, process-related performance measures, efficiency and length of stay, its impact on patient-reported and clinical outcomes is less certain or not identifiable.9 Implementation outcomes were not mentioned. These findings are consistent with the fact that the standards, products and tools promoted by these programmes are often focused largely on structural resources, organisation of infrastructure and process development (inner setting, processes). Despite the potential to leverage these strengths of accreditation programmes to improve the speed and success of implementation of evidence into practice,10 11 there are still opportunities for improvement.

Accreditation programmes could have more impact on clinical outcomes by increasing focus on the speed and effectiveness of adoption of evidence from clinical trials into practice. First, accreditation programmes should encourage institutions to invest in and leverage technology to better disseminate and apply evidence to patient care. Organisations need to move from education and training to automation and computerisation as well as forcing functions, which are higher on the hierarchy of intervention effectiveness.12 Second, programmes should ensure that organisations are performing improvement activities appropriately, including adoption of evidence generated from trials. Programmes should focus less on just ‘checking the box’ to meet standards and more on improving clinical and patient-centred outcomes. A recent study of improvement projects conducted in hospitals participating in ACS accreditation programmes demonstrated that only 24% fully achieved their project goals and that achievement was correlated with better-conducted improvement efforts.13 Optimally, improvement efforts should take the local context into consideration in choosing the problem to be solved, in engaging stakeholders including patients, in selecting an intervention and in deciding on the implementation strategies. Last but not least, programmes need to ensure that organisations do not solely rely on the accreditation process to ensure safe and high-quality care.

Contextual factors can present challenges to accreditation programme efforts to facilitate implementation of trial results. Standards set by such programmes tend to be prescriptive, with the implementation piece being more flexible and context sensitive. Adoption of RCT results may need to be nuanced because of heterogeneity of treatment effect or lack of generalisability due to differences in patient populations, providers’ skills and capabilities, or resources. Not only is context important for local implementation (inner setting), but it also plays a role in how accreditation programmes function to facilitate adoption of evidence into practice in different countries (outer setting). Although national accreditation programmes may not exist, or may not be as well established, in all settings, a recent systematic review of such programmes included studies from all inhabited continents.9 In a perspective written on behalf of the International Society for Quality in Health Care, Mitchell et al provide examples of how accreditation programmes in Canada and Australia have different metrics and processes, yet both help to make knowledge translation happen.11 The society also argues for more work to be done in aligning and harmonising national and international measurements and efforts around accreditation.10

Ultimately, conducting high-quality RCTs is necessary but not sufficient to change practice. Rather, processes and structures need to be developed to amass and collate evidence as it is generated, to address stakeholders’ biases and concerns throughout the research process from conception of a trial to implementation, and to optimise healthcare organisations to be ready to adopt change based on high-quality evidence. Accreditation standards from external agencies are not the sole answer to implementation of evidence into practice. However, they do allow for alignment across a large number of institutions and encourage development of institutional infrastructure to support quality and safety efforts (including culture change). These programmes can and should be further leveraged to incorporate evidence into practice in an efficient and timely manner, to facilitate the identification of and solutions to local barriers to implementation, and to promote the assessment and improvement of all important outcomes, including clinical, patient-centred and implementation outcomes. Thus, both evidence and experience support the need for everyone in the healthcare community, from individuals to professional societies, to engage with researchers to facilitate ‘making it happen’ and to avoid the 17-year lag of ‘letting it happen’.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

References

Footnotes

  • Twitter @LillianKao1

  • Contributors LSK and CYK contributed equally to the editorial.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
