Do we care if a quality improvement (QI) innovation is effective, if it is not sustained? This uncomfortable question is increasingly important as healthcare is judged (and reimbursed) on ‘quality’ and ‘value’. Often, a sentinel safety event or a dip in performance on a quality measure tied to reimbursement spurs a ‘quick fix’ mentality. However, it is essential to consider how to ‘fix the problem’ in such a way that it stays fixed, in other words, so that the ‘fix’ becomes part of everyday practice routines. This is not easy. Reviews of the extant literature point out how little we know about how to do this successfully,1–4 and conceptual models drawing on this literature also vary widely in what they consider to be the key contributors to sustainability.5–9 Where empirical literature does exist, it often demonstrates the lack of sustainability of QI interventions,10 11 and almost no studies describe how QI interventions became adopted in practice and why.12 13
What changes when a QI initiative ends?
At some point, the active implementation of a QI intervention ceases at the end of a local QI project or larger multicentre collaborative. Research that carefully describes which components of an intervention remain, which ones end and why is essential to understanding why the effects of some QI interventions on processes or outcomes are sustained. In this context, we welcome the substantial contribution to the empirical literature that Schechter et al make in this issue of BMJ Quality & Safety.14 The authors describe how the pathway for improving paediatric asthma care (PIPA) intervention was rolled out across 45 community hospitals during a national QI collaborative, focusing on early administration of bronchodilators via metered-dose inhalers, screening for secondhand smoke exposure and, if appropriate, caregiver referral to smoking cessation resources. The collaborative lasted 12 months. Before its end, the investigators approached the 34 community hospitals that remained in the collaborative, asking them to agree to ‘sustainability monitoring’ for an additional 11–14 months (23 of 34 agreed).
The collaborative provided substantial support for active implementation of the intervention, including care pathways, educational materials and seminars, QI mentorship, monthly audit and feedback, a free ‘app’ with clinical and care pathway resources and peer-to-peer learning opportunities. At the end, local site leaders met with their external QI mentors to explicitly facilitate sustainability planning, a novel aspect of this collaborative compared with other studies. The authors state that they wanted a ‘real-world’ evaluation, in the sense that multiple supports, such as audit and feedback, meetings with QI mentors, peer-to-peer learning sessions and educational seminars, stopped when the collaborative ended. The local site leads continued to have access to the care pathways, educational materials and the mobile app. Importantly, they also continued performing chart reviews to measure adherence to the PIPA intervention. Using a rigorous evaluation design, the authors assessed the extent to which the effect of PIPA continued at sites after these active supports were withdrawn following dedicated sustainability planning.
The authors found that withdrawal of support had mixed effects on sustainability. For example, screening for secondhand smoke exposure continued at similarly high rates. Early administration of bronchodilators via a metered-dose inhaler, however, showed a major drop-off immediately after active supports were withdrawn, although it recovered during the sustainability period, such that rates at the end of that period were similar to those achieved during the collaborative. In contrast, caregiver referral to smoking cessation resources exhibited both a large drop-off once supports ended and a continued decline over the sustainability period. The authors suggest these drop-offs may have occurred because of reduced QI resources, shifts in QI priorities and/or decreases in local QI activities, since strategies to enhance sustainability are resource-intensive and may be beyond the reach of these community hospitals. Others have posited that the complexity of the interventions, and whether they are incorporated as revised performance standards (for complex interventions) or automation (for simple interventions), explain why some interventions are sustained and others are not.9 Although the data were not collected to test these hypotheses or to understand why some practices were sustained more than others, emerging literature suggests that there are ways to make common QI interventions more likely to be sustained. Good examples of this emerging literature can be found in the pages of our journal and are highlighted below. We suggest QI practitioners consider two main strategies: plan for sustainment, and think creatively about tweaking common QI interventions to make them more sustainable.
Planning for sustainment in practice
The paper by Schechter et al shows that, even though sustainability planning was facilitated towards the end of the collaborative, some practices were not sustained once the collaborative’s resources were no longer available. So, what practical guidance can be taken from this and other efforts to inform future QI initiatives?
First, do not just do something, stand there. The enhanced attention to quality metrics among health system leadership, particularly when tied to reimbursement, risks a ‘knee-jerk’ mentality when the health system ceases to perform adequately on a measure. Hastily implementing a bundle of common QI interventions (order sets, alerts, audit and feedback, the dreaded staff ‘education’) may provide a temporary solution but is not sustainable. While a charismatic leader can create an environment in which resources are directed to a quality or safety problem, this is necessary but not sufficient for lasting systemic change. Sustainable QI interventions must provide solutions for the underlying problem and are more effective if they simplify clinical workflows. A significant quality or safety problem is an opportunity to understand the process deeply, and only through this understanding can an intervention be created that is both effective and sustainable and that becomes part of everyday practice routines.15 This requires bringing all key stakeholders along on the journey of understanding, not just the journey of action.16 Such an approach may feel too ‘slow’, and certainly calls for being selective about which problems are significant enough to warrant the time.17 In addition, it may require that attention first be focused on finding an effective intervention, and then on working out how it can be integrated into the workflow sustainably, using resources that will still be available after the QI project or collaborative ends. From publications in our own journal, we perceive a direct correlation between the time spent understanding the problem, the workflow and the priorities of the leadership and staff (reflected in sophisticated key driver diagrams) and how likely the QI intervention and its associated outcomes are to be sustained.
To convince healthcare leadership of the need for time and resources to understand the problem, rather than to act and ‘fix the problem’, QI practitioners might suggest an audit of the healthcare system’s own centre for innovation, QI or patient safety to see how many of its novel interventions were sustained over time. This humbling process may prompt a broader rethink, particularly in the context of the Learning Health Systems model. We contend that the metric of success for such centres is not the volume of QI interventions they generate; rather, it may be their ability to sustain those interventions. In other words, effective organisations are not just Learning Health Systems, but also Remembering Health Systems.18–20
Second, once a problem has been selected as relevant and worth fixing permanently, QI practitioners should take advantage of Plan-Do-Study-Act (PDSA) cycles to plan for sustainment early on. Because one adds (or removes) components individually and measures their impact, PDSA cycles allow for careful isolation and refinement of the ‘active ingredient(s)’ and of the resources necessary for the intervention to be effective. It may not be clear at the outset what is most likely to be effective or sustainable. For example, a group of investigators trying to improve adherence to low tidal volume ventilation for acute respiratory distress syndrome found audit and feedback moderately effective but resource-intensive. Much more effective (and likely more sustainable) was changing the default ventilator setting to a lower set volume.21
When done correctly, PDSA cycles will also give important insights into barriers to implementation and sustainability of the ‘active ingredient(s)’ mentioned above. One notable example comes from an intervention intended to improve timely delivery of antibiotics to febrile, immunocompromised children in the emergency department.22 Over a 5-year period, the authors conducted no fewer than 40 PDSA cycles; the majority in the first year focused on effectiveness, with later cycles increasingly focused on sustainability. When this team started, no immunocompromised child with fever received antibiotics within 60 min. A year later, they had clearly found an effective intervention (89% of children received antibiotics, within 37 min on average), and after 4 more years and several further PDSA cycles expressly attending to sustainment, this had increased to 95% of children.
Modify QI interventions to enhance sustainability
Similar archetypes of QI interventions are commonly used to address QI problems: education and training; order sets, alerts and forcing functions in the electronic health record; audit and feedback; and sometimes wholesale system redesign. For example, QI practitioners often reach for ‘training’ or ‘education’ as part of interventions. However, this is rarely effective or sustainable on its own, particularly in the context of staff turnover, and constitutes ‘low-value’ QI.15 23 Sophisticated QI practitioners might choose interventions that seem easier to scale (order sets, alerts), but these are vulnerable to provider fatigue and to being ignored, so their effects are often not sustained. Similarly, ‘forcing functions’ that constrain provider choice as a system-level intervention can be both effective and sustainable, but they are costly to implement, require continuous feedback from end-users to inform iterative design and can result in unintended consequences such as undesired workarounds and delays in care.24 We suggest three methods to improve the sustainability of common QI interventions, drawn from published case examples (table 1).
First, we encourage leveraging the role of families and caregivers as key partners in the design of QI interventions, as their ongoing involvement can promote sustainability. While sustained reductions in central line-associated bloodstream infections (CLABSIs) have been achieved in some settings,12 13 others have found ongoing compliance audits resource-intensive and not clearly linked with sustainability. A recent study showed that involving a Patient and Family Advisory Board led to a completely different design of random audits, and that involving families more actively in central line maintenance, by helping them understand the safety practices aimed at preventing harm, resulted in sustained CLABSI reductions and a 97% compliance rate even 2 years after the intervention.25 Table 1 shows two other case examples in which patients’ active involvement resulted in sustained improvements.
As a second method, we urge interventionists to avoid ‘availability bias’ and to broaden their view of the tools that could be used to promote sustainability.26 Spending time understanding the problem can lead to novel tools to solve it. For example, in an effort to increase neonatal hepatitis B vaccination rates, QI practitioners discovered that legal and institutional barriers were the principal cause of non-adherence to revised guidelines and of low rates of timely vaccination. Changing these barriers to allow verbal consent (instead of the previously required written consent), and to allow consent from the parent who was not the biological mother (whose medical condition could otherwise cause delay), greatly improved vaccination rates in infants in the neonatal intensive care unit.27 This QI team could easily have deployed an educational intervention about the importance of hepatitis B vaccination or created alerts to flag that an infant had not received their vaccine, but neither would have sufficiently addressed the root problem or created a sustainable fix. By understanding the problem first, they also arrived at a solution that made the work of obtaining consent easier, a critical improvement in overall workflow.
Finally, if common QI tools are the best fit for the problem, we suggest that making them ‘count twice’, or adapting the intervention to fit existing workflows, improves both effectiveness and sustainability. For example, in an intervention seeking to reduce sedative-hypnotic prescriptions in the hospital, ‘education’ of students and housestaff was not effective.28 However, having pharmacists review new orders for sedative-hypnotics and provide ‘just-in-time teachable moments’ for trainees made the intervention much more effective, combining education with just-in-time audit and feedback. It also engaged more members of the healthcare team, distributing responsibilities and simplifying the intervention for each individual.
Improving our understanding of how to successfully sustain QI improvements
First, although not explicitly requested as part of the SQUIRE guidelines, we recommend that QI reports include a description not only of the intervention but also of how it was designed to be sustainable, ideally using a sustainability framework that can help teams explicitly address this requirement.4 Second, we encourage QI interventionists to ensure that the duration of the study is long enough to assess sustainability, something unfortunately uncommon.1 If these elements are reported systematically, they will improve the evidence base and our understanding of the underlying mechanisms by which QI interventions become fully integrated in practice and sustained. However, what ‘counts’ as sustainment?2 We recommend an evaluation similar to that of Schechter et al: monitoring sustainability for at least 6 months, and preferably a year, after the active intervention ends. Such early data are likely sufficient to suggest whether the intervention will be sustained, as both a clear decline and a recovery after an initial drop-off should be visible within this timeframe.
Evaluating how QI interventions are sustained is not just an academic exercise, and it does not apply only to individual practitioners: health systems are increasingly measured and reimbursed based on ostensible quality or value. As others have aptly put it, ‘if we want more evidence-based practice, we need more practice-based evidence’.29 More publications like that of Schechter et al are needed to provide practice-based evidence that adds to our understanding of how successful QI interventions are sustained over time. Such studies are essential if healthcare systems are to become the high-quality, high-value systems so desperately needed.
Footnotes
Twitter @BBurkeMD
Contributors REB and PJM-vdM both contributed to conception of the paper, critically read and modified subsequent drafts and approved the final version. Both authors are editors at BMJ Quality & Safety.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.