- Healthcare quality improvement
- Implementation science
- Medication reconciliation
- Quality improvement methodologies
Many quality improvement (QI) interventions are complex, comprising multiple inter-related components that target a range of factors which may lead to change. Some of these components focus on the nature of the improvement planned, the place where the change is to occur, the people who are involved and/or the structures and processes within the organisation itself.1 Understanding how the multiple components of such interventions work together to drive improvement, or why in some instances they fail to do so, can be challenging. Without adequate assessment of the underlying processes and mechanisms through which change occurs, crucial learning on how best to deliver improvements may be lost to the wider system.
The second Multicenter Medication Reconciliation Quality Improvement Study (MARQUIS2) is one such example: a complex, multilevel and multifaceted intervention that aimed to improve patient safety at care transitions in 18 North American hospitals by reducing the risk of medication discrepancies through mentored implementation of a medication reconciliation toolkit.2 3 The toolkit comprised 17 ‘system-level’ intervention components, including training staff, identifying high-risk patients and conducting audit and feedback, and six ‘patient-level’ intervention components, including performing a best possible medication history, either in the emergency department or once a patient has been admitted to hospital. Implementation was supported by clinically trained mentors with experience in QI methods, mentoring healthcare professionals and/or medication safety. Mentors coached each site via monthly calls and performed one to two site visits. An interrupted time series analysis of the 17 sites with sufficient outcome data previously showed that the MARQUIS2 intervention was associated with decreased rates of monthly unintentional medication reconciliation discrepancies in admission and discharge orders compared with baseline.3 Secondary analyses suggested that effects varied across sites and that delivery of system-level interventions alone was not associated with decreased rates, whereas receipt of patient-level interventions alone was.3
In this issue of BMJ Quality & Safety, Schnipper and colleagues therefore conducted an on-treatment analysis of outcomes based on levels of patient exposure to system-level interventions, to shed light on how this complex intervention drove improvement.4 The analysis was based on monthly surveys of site leads, which asked whether they had implemented any of the system-level interventions since the previous month, whether there had been any expansion of the intervention (eg, to a new group of patients) and whether any of the interventions had been discontinued. The authors conducted a similar on-treatment analysis for patient-level interventions, based on study pharmacist review of documented activities in the medical record.
They found that exposure to most system-level interventions was associated with small but significant reductions in the number of discrepancies per patient; receipt of patient-level interventions was associated with large reductions in discrepancies per patient, especially among patients whose best possible medication history was taken in the emergency department. From this, the authors concluded that the best way to reduce the risk of discrepancies (and particularly history discrepancies) is to get the medication history right as early as possible: in the emergency department rather than after a patient has been admitted to hospital. While this insight about timing has real value in its own right, the authors also acknowledged that they were unable to explain why and how differences between the more and less successful sites occurred. They suggest that further explanatory work, incorporating implementation science principles, would be required to inform future improvement efforts elsewhere. So what would implementation science principles entail? We suggest two possible contributions.
First, implementation science offers a route to mechanism-based explanation through theory-informed evaluation.5 Using theory to develop and guide evaluation is not exclusive to implementation science, and indeed the need for more effective use of theory to guide QI efforts has been highlighted previously.6 Developing a pragmatic but coherent theory of change offers a route to understanding how any proposed intervention—and its specific components—will work to bring about change in a given context. Providing this explicit description of an intervention and its anticipated effects would facilitate consideration of the types of assessment necessary to understand toolkit implementation.7 This would then ensure due consideration of what would be required to test any hypothesised mechanisms of change and/or any factors potentially influencing the change process itself.
Although the MARQUIS2 study did build on experiential and empirical learning from earlier work,8 9 there appears to be a lack of articulation (in the form of a theory of change, programme theory or logic model) of how and why the key features of the improvement toolkit were expected to work together to reduce medication discrepancies. In their protocol, the authors did cite Brown and Lilford’s patient safety intervention framework,10 but no causal chain linking toolkit interventions to anticipated improvements is presented. The relative anticipated value of some intervention components, such as hiring and training new pharmacy technicians over and above training existing staff to take best possible medication histories, is therefore unclear. Additionally, the planned evaluation of toolkit implementation focused on the number and type of components implemented and the dose of intervention that patients received. While these measures are necessary, they are not sufficient. Implementation is not static but is better understood as a dynamic process that can lead to the displacement of existing practices by new and evolving ways of working.11 What this means in this context is that the evaluation of toolkit implementation did not consider what improvement processes the toolkit would initiate or change, how and with what effect. Shifting the focus from form to function, and studying the dynamics of the various processes influenced by the intervention, would have helped to explain the relative value of many of the intervention’s components, and indeed the apparent differences between sites.
This brings us to the second contribution that implementation science can make. We argue that the importance of the facilitation role of the site mentors is underplayed in the authors’ analysis. Facilitation is a widely employed strategy in implementation science. It has been conceptualised as both a role and a process,12 with some commentators viewing it as a complex intervention in its own right.13 It can entail internal facilitators or, as is the case here, those external to the organisation, who bring a range of enabling skills and improvement techniques to support the process of change as it occurs. Project management skills, an ability to engage and manage relationships between key agents and an ability to identify and negotiate barriers to implementation are key features of facilitation.14
We suggest that in MARQUIS2, it is facilitation—and the processes of collective learning that it stimulates—that may be central to enabling hospital-level engagement with the toolkit, which in turn influences outcomes. Building on learning from the earlier MARQUIS1 study,9 mentors engaged in 2-day site visits to participating hospitals as early as possible in the implementation period. Doing so may have enabled them to observe baseline practices, build relationships with site teams and their executive leadership, and discuss and address any local barriers to toolkit implementation.
The effectiveness of facilitation is highly contingent on the way it is delivered,15 with significant effort required to ensure that its core learning-oriented function remains adequately resourced and protected.16 Those acting as facilitators need to be flexible and responsive, tailoring their approach to the particular issue, setting and people involved,17 which may not always be easy in situations where the power to initiate, enact and sustain change rests with other individuals and groups.16 Ideally, an a priori—or parallel—assessment of potential mechanisms of change could have surfaced the importance of site mentoring as the key enabling feature; the learning from the first MARQUIS study about the need to provide mentoring as early as possible hints at this. Such an assessment would have emphasised not only the improvement processes mentors could be expected to influence, but also the personal characteristics and skill set that the mentors themselves would require to facilitate change.18 Any differences in the abilities of mentors to stimulate staff and organisational engagement with the toolkit, and so to develop the knowledge, skills and processes necessary to trigger and sustain improvement over time, could then have been explored. A greater understanding of the enactment and influence of the mentoring role would have been gained as a result.
While the authors have surfaced a useful insight into the best way to reduce the risk of medication discrepancies, their research design only sheds light on what aspects of an intervention drove improvement, not on how and why that happened. As such, an opportunity to reveal a more nuanced mechanism-based explanation was missed. Research designs that incorporate concurrent qualitative and mixed methods will be required to generate mechanism-based learning,19 especially if others are to build on the practical lessons learnt from this study in other settings.
Footnotes
Twitter @pmw777, @romankislov
Contributors PW wrote the first draft of the manuscript. PW and RK contributed to the revision and approved the submitted version of the manuscript. PW is the guarantor of the manuscript.
Funding PW and RK are in receipt of funding from the National Institute for Health Research Applied Research Collaboration Greater Manchester.
Disclaimer The views expressed in this editorial are those of the author(s) and not necessarily those of the National Institute for Health Research, NHS England or the Department of Health and Social Care.
Competing interests The authors declare that they have no competing interests.
Provenance and peer review Commissioned; internally peer reviewed.