
Meeting external demands to improve quality and safety of care: learning systematically from the literature
Jack Needleman
Fielding School of Public Health, UCLA, Los Angeles, California, USA
Correspondence to Dr Jack Needleman, Fielding School of Public Health, UCLA, Los Angeles, CA 90095-1772, USA; needlema{at}ucla.edu


Practices that are known to be effective in improving patient safety are routinely either left unimplemented or not sustained in practice. Handwashing, checklists to reduce the risk of central line-associated bloodstream infections and systems for medication reconciliation all serve as examples of basic practices that fail too often. More complex practices or standards of care, such as improving the management of chronic diseases like diabetes and hypertension, can be even more challenging to implement and sustain.

Because of these challenges, understanding the barriers and facilitators to implementation is critical to achieving safe and effective care. The literature offers multiple frameworks for understanding the process and challenges of implementation. Notable among these are Greenhalgh and colleagues’ 2004 framework for diffusion of innovations in service organisations,1 Damschroder and colleagues’ 2009 Consolidated Framework for Implementation Research (CFIR)2 and the 2022 update to the CFIR framework.3 These frameworks, based on extensive literature reviews, seek to provide broad perspectives and highly structured, detailed models of all the factors and circumstances that can potentially influence implementation and diffusion. They have been widely used to structure the reporting of research on implementation. While their comprehensiveness is a strength, these efforts to catalogue the full scope of potential barriers can also blur which challenges and facilitators most demand attention. In addition, the challenges may differ for a subset of implementation efforts, such as health and social care standards, where specific organisational structures or processes need to be adopted and sustained in response to external demands from regulators and accreditation organisations. Given the importance of regulatory and accreditation standards in health and social services, understanding the implementation challenges of meeting these external demands is critical.

In this context, the systematic review by Kelly and colleagues in this issue of BMJ Quality & Safety, and their summary of the factors that influence implementation of health and social care standards,4 make an important contribution to our understanding of this area. The review includes 35 papers and grey literature documents that focus on the challenges of implementing standards and processes imposed through regulation, accreditation or other external parties. The authors use a highly structured process for systematic reviews of qualitative research developed by Sandelowski and colleagues.5 6 They identify six broad sets of facilitators, framed as themes, and six sets of barriers. The facilitators include standards that are adaptable and relevant to day-to-day practice; key staff to lead implementation; collaboration between the implementing service, service users and other stakeholders; adequate resources for implementation; promotion of quality improvement and staff engagement with these efforts; and accessible training, support tools and processes for monitoring performance. The barriers identified are often the complement of the facilitators, with themes including limited adaptability; services working in silos or with limited staff knowledge of the standards; services and service users holding misconceptions about care or perceiving a lack of support within the standards for healthcare professionals, service users and other stakeholders; poor access to resources; resistance to change rooted in culture or in a failure to perceive the standards as care improving or high priority; and a lack of training, support tools and processes for monitoring performance. The authors expand on these themes through multiple thematic statements and additional description, with 22 thematic statements on facilitators and 24 on barriers.

Kelly and colleagues’ review is not anchored to a specific framework, nor does it attempt to be comprehensive. Rather, it reviews the findings from relevant studies and synthesises them into a series of themes and subthemes. By drawing upon and synthesising findings of implementation efforts, one can argue that the facilitators and barriers that are identified are the most salient—those that the organisations found most important to their implementation efforts. These can be mapped to the CFIR or Greenhalgh and colleagues’ frameworks to suggest where attention and effort are most critical.

The challenges in responding to external demands for performance and improvement

The Kelly paper addresses a narrow but important area of process and quality improvement: responding to external standards and expectations of performance. One of the key questions implicit in the paper is how similar or different the facilitators and barriers to successful implementation of externally demanded actions are from those of internally driven improvement efforts. For example, given that much of the implementation literature emphasises the importance of local champions and internal identification of the need for improvement, responding to external demands for change might be expected to impose greater organisational stresses and challenges than internally promoted efforts. However, despite the focus on externally driven efforts, many of the themes identified by Kelly and colleagues are similar to those identified in other implementation efforts. Among the authors’ findings are the importance of managerial leadership and commitment to implementing the standards, and the recruitment and availability of personnel who will act as champions and role models. Where the work is not internally driven, creating internal commitment and engagement becomes one of the additional challenges to successful implementation. Also found to be important to implementing external standards are educational and other materials that explain the content of the standards, and credibility within the organisation that the standards will be associated with safety and quality improvements. Again, these challenges are not unique to externally imposed standards. While the synthesis notes that much of the implementation literature focuses on internally driven, that is, ‘bottom-up’, efforts and that more attention needs to be paid to ‘top-down’ processes, it offers only limited discussion of how implementation might differ for externally imposed standards compared with internally developed and championed efforts, and how these differences might be studied.

For example, by definition, externally imposed standards relate to the ‘outer setting’ of the CFIR and the ‘outer context’ of Greenhalgh and colleagues’ framework. What is perhaps most interesting is how little the Kelly paper finds about the strength of incentives, the expectations of regulators or accreditors, or the perceived penalties of non-compliance as factors motivating change or contributing to implementation failure. The most significant finding in this realm is that monitoring and external assessments may be inconsistent across agencies, undermining the standards’ credibility or the view that they are broadly supported by the relevant external parties. By contrast, benchmarking and reporting were found to be important facilitators. Financial incentives as motivators were found to have only moderate support in the literature reviewed.

The authors note that there has been a lack of recognition within implementation science of the characteristics of the political and outer-context environments. More can be done with these findings, and more broadly, to better differentiate the organisational challenges of externally versus internally driven improvement.

Other factors influencing implementation

Across the themes describing barriers and facilitators, only a limited range of issues was identified with respect to the content or nature of the standards themselves. Adaptability, perceived impact on quality, concerns that implementing the standards might negatively affect care, and relationships among providers or between providers and patients stand out in the synthesis. Simplicity, feasibility and tailoring to the implementation context were identified as facilitators. Interestingly, the extent to which a standard calls for a change of practice was not identified as a barrier.

Many of the identified facilitators and barriers relate to culture and resources. Some map easily to the CFIR and Greenhalgh and colleagues’ frameworks, such as the roles of champions, collaboration and staff engagement. The biggest addition this study makes to the literature on implementation is its highlighting, as facilitators, of the importance of educational materials, of explicit efforts to educate staff on the standards and provide them with formal support (via materials and staff education), and of staff expertise. While these factors are present in the formal frameworks, they do not stand out there as clearly as they do in the current systematic review.

The value of a structured approach to systematic reviews

A key contribution of the systematic review by Kelly and colleagues is its formal implementation of the Sandelowski and Barroso framework for synthesising qualitative research and constructing a meta-summary.5 6 Adoption of this approach brings structure to the review process and deepens understanding of the evidence base being summarised. The paper provides a powerful example of the value and use of this framework.

Once the literature to be reviewed was identified, Kelly and colleagues structured their work into four components. The first was the extraction of conclusions regarding implementation from the discussion sections of the papers reviewed. Conducting systematic reviews can be complex, and this approach avoids the need to reanalyse or reassess the underlying data. Given that many of the included studies were based on surveys of stakeholders or participants in the implementation process, relying on the original authors’ assessments of their findings, rather than reviewing and interpreting the reported survey results, substantially reduces the workload. This approach is, however, dependent on the original authors having correctly interpreted and presented their findings.

The second component is the qualitative analysis of this data set and construction of themes and other analyses. This draws upon standard approaches to qualitative analysis and is well reported by Kelly and colleagues.

The final two components add structure and information to the synthesis. The third component involves the authors evaluating the strength of each study based on the methodology employed. This methodology-specific assessment offers more detail and relevance than a generic framework for evaluating research quality. The fourth component goes beyond assessing the strength of individual studies to provide a formal analysis of the strength and coherence of the overall findings, taking into account the number of studies, the consistency of findings across studies and the strength of the individual studies used as evidence for the constructed themes and subthemes. This is an important addition to the methods for systematic reviews. As one examines tables 1 and 2 in Kelly and colleagues’ paper, and the colour coding of the strength-of-evidence assessments, one is struck by how many of the conclusions merit high confidence, and how few are characterised as low confidence. Among facilitators, the conclusions with the lowest confidence are those on the facilitating value of budget and facilities. With respect to barriers, the findings with the lowest confidence are those related to judgements that the standards are not in fact norms for high-quality care, and that inconsistent external judgements and use across monitoring agencies create barriers to implementation.
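Kelly and colleagues’ actual grading criteria are set out in their paper; the sketch below is a purely hypothetical illustration of how an overall-confidence grade for a theme could combine the three ingredients just described: the number of supporting studies, the consistency of findings across studies and the strength of the individual studies. The function name, thresholds and labels are invented for illustration and are not the authors’ rules.

```python
# Hypothetical illustration only: grade confidence in a theme from
# (a) the number of supporting studies, (b) consistency across studies
# and (c) the number of methodologically strong studies. The thresholds
# and labels are invented; they are NOT Kelly and colleagues' criteria.

def grade_confidence(n_supporting: int, n_contradicting: int,
                     strong_studies: int) -> str:
    total = n_supporting + n_contradicting
    if total == 0:
        return "no evidence"
    consistency = n_supporting / total  # share of studies that agree
    if n_supporting >= 5 and consistency >= 0.8 and strong_studies >= 2:
        return "high confidence"
    if n_supporting >= 3 and consistency >= 0.6:
        return "moderate confidence"
    return "low confidence"

# Example: a theme supported by 6 studies (1 dissenting), 3 of them strong.
print(grade_confidence(6, 1, 3))  # -> "high confidence"
```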

Kelly and colleagues also provide an analysis of their database using two measures: the frequency effect size, that is, the proportion of studies in which a facilitator or barrier is reported, and the intensity effect size, a study-level measure of the proportion of all identified facilitators or barriers that are reported in an individual study. Both provide information about the database for the systematic review, although neither should be interpreted as a measure of the importance of a given facilitator or barrier. One of the strengths of the approach taken by Kelly and colleagues is that it identifies facilitators and barriers that are salient based on the extent to which studies have identified them as such. But salience is only imperfectly correlated with importance, and the frequency with which a factor is mentioned across studies is likewise only imperfectly correlated with importance. While we might conclude from the study of actual implementation activities that items in the CFIR or other frameworks that are never mentioned are unlikely to be important facilitators or barriers, the opposite conclusion cannot be drawn. Facilitators or barriers that are frequently mentioned may vary substantially in the extent to which they aid or impede implementation. We need different methods, analyses and metrics to measure how large the effect of a given facilitator or barrier is. That is one of the critical challenges for future research on implementation.
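To make these two metrics concrete, the sketch below computes them from a toy study-by-factor matrix, assuming a simple binary coding of whether each study reports each facilitator or barrier. The study names, factor names and data are hypothetical and are not drawn from Kelly and colleagues’ database.

```python
# Toy binary coding: reports[study][factor] is True if that study reports
# that facilitator or barrier. The data are invented for illustration.
reports = {
    "study_A": {"adaptability": True,  "champions": True,  "resources": True},
    "study_B": {"adaptability": True,  "champions": False, "resources": False},
    "study_C": {"adaptability": True,  "champions": True,  "resources": False},
}
factors = ["adaptability", "champions", "resources"]

# Frequency effect size: for each facilitator or barrier, the proportion
# of studies in which it is reported.
frequency = {f: sum(reports[s][f] for s in reports) / len(reports)
             for f in factors}

# Intensity effect size: for each study, the proportion of all identified
# facilitators or barriers that it reports.
intensity = {s: sum(reports[s][f] for f in factors) / len(factors)
             for s in reports}

print(frequency)  # adaptability 1.0, champions ~0.67, resources ~0.33
print(intensity)  # study_A 1.0, study_B ~0.33, study_C ~0.67
```

As discussed above, both quantities describe the evidence base: a high frequency effect size indicates that a factor is widely reported, not that it strongly aids or impedes implementation.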

Moving forward

Future research, and systematic reviews of past research, need to further analyse how easy or difficult it is to overcome a barrier and how much tailwind a specific facilitating factor provides in moving implementation forward. There is also a need for studies that help organisations understand not only how important barriers and facilitators are, but how barriers can be reduced and facilitators cultivated. Are some barriers more easily overcome, or can success be achieved without fully addressing some of them? These are critical questions on which organisations need guidance, given the limited time and resources often available for implementation and the need for organisations to demonstrate success in these activities.

More research is also needed on variance in performance in successful implementation, to understand who is more or less likely to succeed in making and sustaining change. A recent study examining the determinants of regulatory compliance in health and social care services, drawing largely on studies of nursing homes, found higher compliance in smaller facilities and in those with higher nurse staffing levels and lower turnover.7 More studies examining variation in performance, the factors associated with that variation and the characteristics of positive deviants will strengthen our understanding of implementation and improve the likelihood that organisations can succeed in meeting both external demands and internal drivers to increase safety and quality.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

References

Footnotes

  • Twitter @JackNeedleman

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; externally peer reviewed.
