Cross et al's important new article identifies the 'poor translation of clinical practice guidelines … into clinical practice' and the need to understand how to close the gap between the production of new evidence and its use in clinical settings.1 They analyse 16 studies that focus on how 'knowledge brokers' might help, finding a decidedly mixed picture. While knowledge brokerage involves sensible measures (generating and sharing guidelines, engaging with relevant stakeholders and building greater capacity to share and adopt guidelines), its effectiveness is clear in only half of the relevant randomised controlled trials (RCTs), although the proportion is higher across all study designs. Given the small number of relevant studies that the authors found (including relatively few on 'linkage agent roles') and the high uncertainty that remains, it is no surprise that they conclude with a call for more research.
In this editorial, we explore what that research would look like. We show that a lot of the groundwork has already been done, contributing to a wider interdisciplinary field, variously dubbed 'research-policy engagement', 'impact research', 'research on research use' or 'transforming the use of research evidence',2 whose concerns overlap with those of implementation science.3 The common thread is a focus on: what it means to produce high-quality and policy-relevant knowledge; how and why policymakers and practitioners use that knowledge; and the impact that this use of knowledge has on policy and practice (although some use the terms 'evidence' or 'research' in different contexts). Overall, this interdisciplinary field combines a focus on practical strategies (eg, with reference to 'what works') with debate on, for example, how to determine the quality of knowledge or assess how well it is used (such as by policymakers or practitioners who also draw on experiential knowledge).
Drawing together learning from across this field can put us in a stronger position to consider how best to translate evidence into policy and practice. For example, researchers often consider the challenges of evidence use in policy and in practice separately, despite the obvious ways in which policy processes shape practice settings, and vice versa.
This wider field is characterised by the following features, which correspond to four key points raised by Cross et al.1 First, many scholars identify, from their perspective, a worrying gap between the abundance of high-quality research evidence and its sparing use in policy and practice, with barriers including: limited access to research, the lack of timely findings, the mismatch between researcher and research user timelines, limited skills among research users and the costs of better engagement.4
Second, they often recommend similarly sensible-sounding measures to help close that gap, including: improving the clarity and dissemination of research, developing better relationships with users of research (such as via knowledge networks or regular workshops), employing knowledge brokers to connect those who share and use evidence, and building the capacity of research users to understand new knowledge.5 6 These suggestions come with a wide range of names for individual roles or activities, including policy or research 'entrepreneur' or 'champion'. For example, Cross et al1 identify 'local opinion leader', 'clinical champion', 'change champion', 'agents of change', 'academic detailer' and 'knowledge translation broker', while the National Institute for Health and Care Excellence employs 'implementation teams' to support the uptake of clinical guidelines in health and social care practice. In other words, 'knowledge broker' has become a broad shorthand term used in reviews, and not necessarily used by participants in each initiative (which adds uncertainty about whether they are describing the same thing).
Third, only a small proportion of such initiatives is accompanied by a systematic evaluation of their impact. Oliver et al7 find a 'huge expansion in research-policy engagement initiatives' to disseminate and communicate research, respond to requests for evidence, facilitate access, build user capacity or a wider infrastructure, foster partnerships, foster leadership and reward impact. However, few are evaluated (and RCTs are very rare), and the evaluations that do exist rarely draw on other studies of engagement, which might provide, for instance, theoretical or contextual knowledge about wider policy processes.
Fourth, a review of these evaluations produces a mixed picture of impact: (a) 'internal evaluations' of dissemination suggest that stakeholders may value the evidence but there remains 'limited evidence of effect on policy or practice'; (b) participants describe a general benefit of networks or partnerships without describing their tangible effect on policy or practice; and (c) capacity building tends to lead primarily to more research (or benefits to individual researchers) rather than to research impact, with research users generally unable to translate new (research-heavy) skills into practice.7
What would more research on evidence use look like?
From this wider literature, we identify four key points regarding the need for more research on roles such as knowledge brokerage. First, as Cross et al1 note, there is clearly a need to produce more studies with sophisticated methods, and a clear rationale, to make sure that we can pinpoint 'what works'. Part of the reason that we do not know which solutions 'work' is that we do not first agree on what problem we are trying to solve. Having clear goals, informing a theory of change with defined outcomes, is essential if robust evaluations are to be informative. Evaluations may be designed as RCTs but, unless our aims and expectations are clear, we will not know the extent to which experimental trial designs can capture all the learning necessary to implement successful interventions across settings.
Second, given that clinical practice has distinctive but not unique elements, we encourage greater learning from outside this narrow sphere of activity. Within health studies, there is a wealth of approaches to knowledge brokerage,8 long-term collaborations9 and responsive research networks.10 Studies in education and environmental sciences have demonstrated the importance of a well-designed research infrastructure11 to sustain meaningful collaboration.12 International development studies have shown how to relate research engagement to commonly held values to support stakeholder engagement.13 14 Further, Supplee et al's15 comparison of the 'methods, approaches, and evolution' of implementation science and 'research on research use', and Oliver and Boaz's16 overview of a series of articles on 'making and using evidence', help bring together learning from multiple approaches.
Third, the most frequent cautionary tale from this interdisciplinary field is that short-term and linear approaches, focused largely on disseminating evidence, are not effective on their own. Even interventions that aim to package research as attractively as possible, through brief summaries or accessible toolkits, do not improve evidence uptake by themselves. Rather, 'relational' interventions, such as those that build relationships and trust over the longer term, are more supportive of longer-term change.16 Yet too many initiatives still imagine a linear process of learning in which the primary knowledge comes from researchers and is transmitted to practitioners.17 This approach risks diminishing respect for the essential knowledge that comes from working in policy and practice. In contrast, relational approaches foster more meaningful two-way exchanges to make sense of new evidence in specific contexts, and provide space for practitioners to bring their own knowledge (and that of other stakeholders). The amount of relational work required to support the meaningful use of research evidence in practice should not be underestimated.18
Fourth, we should pay proper attention to the wider context in which evidence use takes place, avoiding descriptions of the evidence-to-policy or evidence-to-practice gap as primarily technical and amenable to simple, testable solutions. Instead, we should seek to understand how policy processes affect knowledge exchange. Policy studies offer a body of knowledge to explain the contextual challenges that affect evidence implementation in fields such as healthcare, and interdisciplinary scholarship on 'systems' approaches helps us to relate evidence use to: (1) complex organisational or policy processes that are not so amenable to simple solutions, or (2) a contested political process in which participants do not agree on what the problem is, or may hold beliefs or aims that will not be reconciled simply by increasing communication.19
Such studies highlight the need to foster evidence-using systems rather than focusing solely on useable evidence. In other words, think about how knowledge is produced, mobilised and used across a large network of organisations in which there is no single ‘centre’ or repository for useful evidence. How could people and organisations be supported, over the long term, to make evidence use a routine way of working across a large number and wide range of diverse organisations?
Taking a 'systems approach' (or fostering 'systems thinking') can involve rather different perspectives, reflecting the different meanings attached to systems in this field. First, policy studies may explore the contrast between simplified models of policymaking (such as an orderly cycle of stages, including defining problems and generating solutions) and complex policymaking systems. Crucially, the latter defy central government control and are beyond the full understanding of any policy participant. This discussion encourages participants to dispense with the idea that evidence production and use can be part of a simple linear process in which there are clear roles, responsibilities and opportunities to engage.19 Second, studies of the design of evidence use initiatives focus on the providers and users of evidence, who need pragmatic ways to engage effectively and reflect on their strategies. For example, studies of brokerage may identify effective points of intervention in specific contexts (such as different opportunities in, say, clinical settings or government health departments), but they also prompt normative discussions about the goals of this shared endeavour. In other words, who are we doing this for, and who is benefiting? What interests are these interventions serving, and how can we ensure that we are maximising the value for both research and practice? It is only in this wider context that we can fully evaluate the role and value of initiatives such as knowledge brokerage.
Overall, while brokers are essential actors, employing relational skills to oil the wheels of evidence implementation, they need to be embedded in supportive systems. It is therefore to be expected that trials of brokerage alone show limited effects. Evidence use initiatives may make use of brokers, but a broker working alone is unlikely to overcome the wide range of systemic challenges to evidence use.20
Footnotes
Twitter @Cairneypaul
Contributors PC led the process but each author made an equal contribution.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.