Ten challenges in improving quality in healthcare: lessons from the Health Foundation's programme evaluations and relevant literature
  1. Mary Dixon-Woods,
  2. Sarah McNicol,
  3. Graham Martin
  1. Social Science Applied to Healthcare Improvement Research Group, Department of Health Sciences, School of Medicine, University of Leicester, Leicester, UK
  1. Correspondence to Professor Mary Dixon-Woods, Social Science Applied to Healthcare Improvement Research Group, Department of Health Sciences, School of Medicine, University of Leicester, 2nd Floor, Adrian Building, University Road, Leicester LE1 7RH, UK; md11@le.ac.uk

Abstract

Background Formal evaluations of programmes are an important source of learning about the challenges faced in improving quality in healthcare and how they can be addressed. The authors aimed to integrate lessons from evaluations of the Health Foundation's improvement programmes with relevant literature.

Methods The authors analysed evaluation reports relating to five Health Foundation improvement programmes using a form of ‘best fit’ synthesis, where a pre-existing framework was used for initial coding and then updated in response to the emerging analysis. A rapid narrative review of relevant literature was also undertaken.

Results The authors identified ten key challenges: convincing people that there is a problem that is relevant to them; convincing them that the solution chosen is the right one; getting data collection and monitoring systems right; excess ambitions and ‘projectness’; organisational cultures, capacities and contexts; tribalism and lack of staff engagement; leadership; incentivising participation and ‘hard edges’; securing sustainability; and risk of unintended consequences. The authors identified a range of tactics that may be used to respond to these challenges.

Discussion Securing improvement may be hard and slow and faces many challenges. Formal evaluations assist in recognising the nature of these challenges and help in addressing them.

  • Adverse events
  • epidemiology and detection
  • qualitative research
  • culture
  • quality of care
  • medical error

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/3.0/ and http://creativecommons.org/licenses/by-nc/3.0/legalcode

Introduction

Despite continuing evidence of problems in patient safety and gaps between the care patients receive and the evidence about what they should receive, efforts to improve quality in healthcare show mostly inconsistent and patchy results.1–3 There is an increasing interest in explaining why and, in particular, in identifying the barriers and enablers to improvement.4 ,5 One potentially valuable source of learning is to be found in formal evaluations of programmes to improve quality in healthcare.

A large portfolio of such programmes (table 1) has been assembled by the Health Foundation, an independent charity working to improve healthcare quality in the UK. The programmes have diverged in their scope and remit, but all are united by their focus on technical skills, leadership development, clinical engagement, capacity, knowledge and the will for change. In a perhaps unique contribution, the Health Foundation has commissioned independent evaluations of each of them. The evaluation reports6–18 represent a resource that could provide generalisable insights into the challenges faced in trying to improve quality in healthcare and how improvement processes could be optimised.

Table 1 Health Foundation evaluation reports

In this article, we provide a review of the findings of these reports and specifically focus on the challenges to implementation of the improvement efforts. To draw out wider lessons, we set the learning from the reports in the context of relevant literature.

Methods

We reviewed evaluation reports relating to five Health Foundation programmes (table 1). Further details of each of the reports are provided in a longer version of this review on the Health Foundation's website (www.health.org.uk/overcoming-challenges).

We began by reading each report carefully. The reports varied in quality, length, intended audience and level of detail, but we did not make any decisions about inclusion or exclusion of evidence based on these characteristics. We thus make no comments about the strength of evidence we present, though we have sought to ensure that all claims are well supported both by the findings from the analysis of the reports and by a corresponding research base.

To undertake the analysis and synthesis, we initially set up in NVivo a thematic coding framework based on Damschroder et al.'s consolidated framework for implementation research (see summary in online supplementary appendix A).19 This framework was selected to enable a rapid preliminary classification of the material, and our approach thus has a number of similarities to ‘best fit’ evidence synthesis, which is based on the framework analysis technique.20 ,21 We also conducted a rapid narrative review of organisational factors that are likely to hinder improvement efforts, with the primary aim of illuminating and deepening understanding of the findings in the evaluation reports by linking them to relevant academic literature. We built on a literature review in a relevant area22 and a combination of professional expertise,23 reference chaining and expert recommendation. We

  • treated the review question as a compass, not an anchor, so that the question was open to being refined as the review proceeded;

  • used iterative, intuitive searching of literatures combined with more formal systematic searching techniques;

  • engaged in selective, judicious sampling of relevant literatures;

  • sought to integrate the various literatures through a narrative argument.

The areas of literature we searched included: organisational studies; medical, economic and institutional sociology; social and community psychology; critical development studies; social movements; and innovation and diffusion studies. We examined original empirical research, theoretical and conceptual work and reviews (both systematic and narrative). On every topic that we discuss, there is an extensive associated literature, and we make no claim to comprehensiveness. Given the potential for a vast and overwhelming presentation, and a concern with making this review accessible for non-academic audiences, our review is necessarily selective, and only sufficient literature to support the points made is cited.

The preliminary framework was modified substantially as we refined our analysis, discussed the emergent findings within the project team and integrated the relevant literature. The final framework is represented by the thematic headings and sub-headings of this article. There are other ways in which the same material could be organised, and our choice of presentation here does not represent any attempt to impose a hierarchy of importance on particular themes, but rather an effort at clarity.

Findings

The Health Foundation programmes, taken in the round, intervened at many different levels, from the individual to the team and from the organisation to the system. All included varying degrees and types of support and facilitation from the Health Foundation. However, most learning from the programme evaluations and related literature applies to both externally and internally initiated improvement efforts. Our findings reflect and are constrained by the nature of the programmes and their interventions and by the nature and reporting of the evaluations. However, a number of important themes emerge across the reports that are likely to be useful for most improvement efforts. We identified 10 key challenges (box 1) in securing improvement, covering three broad themes: challenges 1–4 relate to the design and planning of improvement interventions; challenges 5–8 describe organisational and institutional contexts, professions and leadership; and challenges 9 and 10 refer to sustainability and spread beyond the initial intervention period, and to unintended consequences.

Box 1

How to address ten challenges in improvement

Design and planning of improvement interventions

Challenge 1: Convince people that there's a problem

Use hard data and secure emotional engagement by using patient stories and voices.

Challenge 2: If you do it, will it work? Convince people of the solution.

Come prepared with clear facts and figures, have convincing measures of impact and be able to demonstrate the advantages of your solution.

Challenge 3: Data collection and monitoring systems

This always takes much more time and energy than anyone anticipates. It's worth investing heavily in data from the outset. Assess local systems, train people and have quality assurance.

Challenge 4: ‘Projectness’ and ambitions

Over-ambitious goals and too much talk of ‘transformation’ can alienate staff if they feel the change is impossible. Instead, match goals and ambitions to what is realistically achievable and focus on bringing everyone along with you. Avoid giving the impression that the improvement activity is unlikely to survive beyond the time-span of the project.

Organisational and institutional contexts, professions and leadership

Challenge 5: Organisational context, culture and capacities

Staff may not understand the full demands of improvement when they sign up, and team instability can be very disruptive. Explain requirements to people and then provide ongoing support. Make sure improvement goals are aligned with the wider goals of the organisation, so people don't feel pulled in too many directions.

Challenge 6: Tribalism and lack of staff engagement

Overcoming a perceived lack of ownership and professional or disciplinary boundaries can be very difficult. Clarify who owns the problem and solution, agree roles and responsibilities at the outset, work to common goals and use shared language.

Challenge 7: Leadership

Getting leadership for quality improvement right requires a delicate combination of setting out a vision and sensitivity to the views of others. ‘Quieter’ leadership, oriented towards inclusion, explanation and gentle persuasion, may be more effective.

Challenge 8: Incentivising participation and ‘hard edges’

Relying on the intrinsic motivations of staff for quality improvement can take you a long way, especially if ‘carrots’ in the form of incentives are provided—but they may not always be enough. It is important to have ‘harder edges’—sticks—to encourage change, but these must be used judiciously.

Beyond the intervention: sustainability, spread and unintended consequences

Challenge 9: Securing sustainability

Sustainability can be vulnerable when efforts are seen as ‘projects’ or when they rely on particular individuals.

Challenge 10: Side effects of change

It's not uncommon to successfully target one issue while also causing new problems elsewhere. This can cause people to lose faith in the project. Be vigilant about detecting unwanted consequences and be willing to learn and adapt.

Design and planning of improvement interventions

Challenge 1: Convincing people that there is a problem

One fundamental, but often poorly met, challenge for improvement efforts is that of convincing healthcare workers that there is a real problem to be addressed. Clinicians and others may argue that the problem being targeted by an improvement intervention is not really a problem; that it is not a problem ‘around here’; or that there are far more important problems to be addressed before this one.6 ,15 Trying to convince clinical teams who think they are already doing well to change is likely to be futile unless they can be shown that action is really needed.

Those designing and planning interventions should be careful to target problems that are likely to be accepted as real. Possible strategies for establishing the problem as a problem include using hard data to demonstrate its existence, using patient stories to secure emotional engagement,24 engaging clinicians in defining what they would like to improve in their service, and showing that there is a ‘relative advantage’ in implementing the intervention.25

Challenge 2: Convincing people that the solution chosen is the right one

Improvement interventions are often ‘essentially contested’: everyone may agree on the need for good quality but not on what defines good quality or how it should be achieved. Clinicians and others may resist change on grounds that interventions lack sufficient evidence or are incongruent with preferred ways of practising that already appear to deliver good results.8 ,26 Ensuring that there is good quality scientific evidence to support interventions, and that implementers are well briefed and capable of handling challenge, is therefore critical.6 ,27 One strategy for ensuring acceptability of interventions involves using well-facilitated forums to discuss and debate the evidence and expose it to challenge, rather than hoping that the evidence will ‘speak for itself’.28 ,29 It may also help if improvement efforts are underpinned by a clear and explicit ‘programme theory’:8 an account of the activities to be undertaken, and the causal links between these activities and the outcomes sought.30 Among other things, a programme theory makes explicit why an intervention is likely to work, and helps clarify focus and strategic direction. Considerable effort needs to be invested in the initial programme theory, but it should not be regarded as fixed and immutable; it may develop over time as those engaged in the programme learn from their experiences of implementation.24

Challenge 3: Getting data collection and monitoring systems right

Data collection and feedback are indispensable to improving quality. Data help in demonstrating the scale of a quality problem and show what is happening in response to an intervention. But data collection, monitoring and feedback systems are remarkably hard to get right: they are often poorly understood, poorly designed and poorly implemented.15 Local teams may lack expertise and experience in collecting and interpreting data, or they may struggle with systems that are designed for collecting administrative and clinical data but not for monitoring quality.8 Measures that are excessively burdensome, or not seen as credible by the target community, risk alienating, rather than engaging, clinicians and producing confusion about how far changes are real.6 ,9 Poorly chosen measures can also provoke gaming, where participants are incentivised to produce the desired numbers without the intended changes in practice.31 Measurement systems need to be explicitly designed into improvement activities from the start, and they need to be adequately resourced.32 ,33 Systems need to be fit for purpose and avoid imposing excessive burdens or other unintended consequences, and staff need training on how to collect and interpret data.

Challenge 4: Excess ambitions and ‘projectness’

Enthusiasm for improving quality is very natural but it can easily overwhelm the available resources. Ambitious ‘stretch goals’ and talk of ‘transformation’ may risk alienating people early on and later lead to disillusionment if aims are not realised. The scale of resource required to support improvements is often underestimated,6 ,11 but without adequate financial support, infrastructure, managerial skills and dedicated time, efforts to improve quality can quickly run into difficulties.34 Difficulties can be compounded when new initiatives are not given a diagnosis phase or enough time to ‘bed in’.25 Activities such as team- and relationship-building are time-consuming, especially when they start from a low base, and it may be hard to sustain enthusiasm and effort over long periods and maintain focus when interests and priorities move elsewhere.15 The scale and demands of improvement interventions therefore need careful assessment at the outset and the implications of involvement need to be explained.17

Improvement efforts are also prone to acquiring a ‘project’ status that can bring opportunities but also threats. Projects can be key tools for introducing novel work practices,35 and they offer benefits, including a distinctive focus, identity and drive, excitement and interest. However, if projects lack ongoing senior managerial support, they may be undertaken at the margins of mainstream activities, and it can be difficult to make the transition from project to institutionalisation as part of wider organisational policies, procedures and norms. Perhaps most corrosively, activities seen as time-limited risk simply being tolerated or ignored until they come to an end and go away. This points to a need to find a compromise between harnessing the distinctiveness of projects as a tool for change and ensuring that such projects are also aligned with the wider ‘direction of travel’ of organisations.

Organisational and institutional contexts, professions and leadership

Challenge 5: Organisational cultures, capacities and contexts

Trying to secure improvement in situations where organisational capacity is inadequate, and culture is adverse, can result in emotional exhaustion and evaporation of support.36 Differences in morale, leadership and management in organisational settings may lead to variation in outcomes.15 ,37 ,38 Organisational cultures supportive of personal and professional development, and committed to improvement as an organisational priority,39 are, unsurprisingly, more likely to provide an environment where improvement efforts can flourish.12 However, some clinical staff may be actively hostile towards improvement efforts, or simply put little effort into supporting them.40 Some managers can be too busy to take an interest in improvement projects or may even feel threatened.12 Attempts to secure resources, such as budgets and release of time to support improvement, may sometimes be seen by managers as illegitimate or as political acts by clinical staff, and handled accordingly.36 The complexity of many interventions can also pose significant challenges for organisations. Lack of adequate structures to support improvement activities often means creating new systems and processes from scratch.6 ,9 Team instability—arising, for example, from lack of succession planning, rotating staff, shift patterns and use of agency staff—can result in stalled progress or make it difficult to sustain collective knowledge and enthusiasm.6

Outer contexts, including shifting policy agendas and regulatory requirements, can be a major barrier,11 ,12 because of the organisational turbulence, staff distraction and instability of structures and teams that they produce. Problems can occur when improvement efforts run counter to centrally driven national pushes and pressures, or are introduced into environments already suffering organisational stress from mandated requirements. At senior management level, interventions that fit with strategic goals and organisational aspirations are more likely to be met with active enthusiasm.41 Involving service users in organisational change may increase its legitimacy and its chances of success, ensure that improvements are focused on patients' priorities, and assist in dissemination activities.15 However, evidence that user involvement improves quality and outcomes remains limited,42 ,43 and many challenges remain.44 External support from professional societies or consultants may also be important in overcoming limitations of local expertise and capability. However, the extent to which external support can compensate for major structural and resource deficits or adverse organisational cultures is unclear.

Challenge 6: Tribalism and lack of staff engagement

Engaging staff and overcoming a perceived lack of ownership are among the biggest challenges in improvement efforts.45 Boundaries between professional, disciplinary and managerial groups present important obstacles to change, and consensus within one profession is not always shared by others.15 Middle managers and frontline staff can be especially difficult to engage in improvement, because they already face numerous, complex, competing clinical and organisational demands, often with inadequate staffing, limited resources and equipment shortages.6 Resistance to improvement efforts can also result from attempts to guard professional autonomy and suspicion about externally led change.46 Yet professional norms, values and networks can also offer an important resource in seeking to improve care7; professions can often secure conformity of their members to norms and standards more effectively than managerially led efforts.47 ,48 Avoiding a situation where improvement is seen simply as a managerial intrusion into professional concerns is thus important.25 ,49 However, there is a danger that being too deferential to existing norms, values and behaviours may result in failure to challenge poor quality practices. Norm-disrupting tactics may be needed to confront institutionalised complacencies.24 Tapping into profession-specific networks, norms and values can help mobilise commitment and enthusiasm more effectively than coercive tactics.25 ,50 ,51 For example, peer-led audit can achieve high participation and trusted results.15 Peer support, though resource-intensive, can also produce highly valued interaction and help to sustain momentum, build confidence and provide a source of encouragement and motivation through sharing common problems.11 ,14 ,15 However, groups may require expert help in teamwork and relationship management to realise their potential.11

Challenge 7: Leadership

Leading improvement efforts well is challenging and delicate, requiring a combination of technical skills, facilitation skills and personal qualities.12 Leadership needs to happen at multiple levels, and it needs to ensure both alignment with staff priorities and active work among staff to foster collaboration and engagement with improvement aims.25 ,39 ,46 ,52 Respected individuals can play a vital role in encouraging colleagues across different professions.15 Key to success may be ‘quieter’ leadership, less about bombastic declarations and more about working to facilitate collaboration.53

Challenge 8: Incentivising participation and ‘hard edges’

Busy clinicians may need incentives if they are to prioritise improvement activities. Many improvement efforts seek to draw on the intrinsic motivation of healthcare professionals to maximise the quality and effectiveness of the care they provide for patients. Visible improvements and unequivocal evidence of potential patient benefit through credible feedback6 can encourage greater clinician involvement in what may otherwise be seen as relatively low-status activity with poor rewards.16 ,18 However, ‘softer’ modes of persuasion are not always enough to stimulate changes to practice. A combination of soft persuasive tools and ‘harder edges’ that involve a firmer approach to leading change may be needed—for example, using peer review and audit as a means of both supporting change and reminding participating sites that they are being held to account, or including involvement in quality activity as a criterion for engagement in continuing professional development and revalidation.15

Beyond the intervention: sustainability, spread and unintended consequences

Challenge 9: Securing sustainability

Besides their potential to meet resistance at their inception (see challenge 4 above), ‘projects’ may be especially vulnerable to challenges of sustainability. Clinicians' and managers' interest may dwindle when, at a project's end, they are faced with new, competing priorities.15 Most initiatives have to be resource neutral, or use existing resources more efficiently, if they are to continue.15 Sustainability is threatened when there is over-reliance on certain individuals and by assumptions that interventions will simply diffuse on their own or readily transplant from one context to another.15

The available evidence suggests the need for explicit attention to ‘spreading learning and sustaining change’ from the outset.8 ,12 Demonstrating clinical effectiveness, efficiency and mainstream relevance is important to this,8 as is ‘locking in’ changes by adapting performance management policies, organisational infrastructure and institutional processes.34

Challenge 10: Risk of unintended consequences

Though it is often assumed that quality improvement programmes are harm-free, there is some evidence that they can produce unintended and unwanted consequences54—including, ironically, that of souring clinicians against quality improvement.6 Attention is therefore needed to the possibility that improvement efforts may produce iatrogenic effects. In a few projects, for example, there were unexpected opportunity costs, which were felt by some to outweigh any benefits.

Conclusions

The Health Foundation's evaluation reports offer a rich resource of learning for those who undertake improvement work in healthcare. Coupled with insights from the wider literature on improvement and organisational change, they provide some important messages about what is likely to work in improving the quality of health services and what pitfalls to anticipate. Perhaps the most striking message of this review, though, is that there is no magic bullet in improving quality in healthcare. Rather, improvement requires multiple approaches, often apparently contradictory: strong leadership alongside a participatory culture; direction and control alongside flexibility in implementation according to local need; and critical feedback on performance without the attachment of blame.

One challenge is to contain the urge to act, and not ‘crack on’ too quickly, yet at the same time produce encouraging results that sustain enthusiasm and commitment. Extensive development periods are needed to invest in specifying the programme theory, consultation, designing and selecting appropriate measures, setting up data collection systems, winning trust and support and assessing organisational capacity—but at the same time there is a need to avoid inducing a wearying loss of momentum. Tensions can arise if project leaders become carried away by enthusiasm and set goals that are overambitious; these tensions may be compounded by organisational impatience for quick wins and early results.

A further challenge concerns the need to appeal to multiple audiences; gaining the support of one stakeholder group may mean alienating another. Efforts that go ‘against the grain’ of wider professional, organisational and policy aims are likely to face significant difficulties in realising their ambitions, while interventions seeking to change behaviour without some form of professional endorsement are often on a hiding to nothing.55–57 Improvement interventions are much more likely to succeed when they are developed with, rather than imposed on, healthcare professions. At the frontline, fostering a sense of ownership is crucial. Giving those whose practice will be directly affected a chance to participate in refining the customisable elements of an intervention,58 and remaining clear about which elements should remain unaltered, may be very helpful. Engaging many constituencies takes time and energy, but improvement work aligned with the interests of multiple groups and tied into enduring policy foci has a better chance of securing wider influence over time.

Explicit assessments of the effort required by different parties need to be undertaken at an early stage, and participants need to make explicit commitments to deliver on this effort. Senior and executive level buy-in for improvement work needs to be backed up by active support, two-way communication and strategic alignment—and appropriate resources for the task in hand. Formal agreements may be appropriate to ensure that organisational support does not wane. Proper preparation involves careful assessment of the problem that needs fixing, recruitment of key individuals, setting up appropriate measurement systems and consideration of the obstacles likely to be encountered and how to surmount them. Ongoing critical review of improvement efforts is also crucial: new challenges will arise, some approaches will work better than others, and unintended adverse consequences may emerge, to which all involved need to remain alert.

The value of formal evaluations of improvement efforts is clear from these reports. Such evaluations enable both gains and losses in improvement to be treated as learning opportunities and contributions to improvement science. As other evaluations have suggested,59 the Health Foundation reports show that change is hard and slow, but not impossible. Many challenges are deep-set and structural in nature, but whatever their form, recognising their character helps in addressing them. More explicit acknowledgement of the complexity of the challenges facing those improving quality may help to avert disappointment, maximise learning and accelerate future progress.

Acknowledgments

The Health Foundation funded this review and provided helpful comments on earlier drafts of both the full project report and this article. Justin Waring, Diane Ketley, Peter Pronovost, Huw Davies and Glenn Robert also reviewed and gave valuable comments on the full project report. The full project report is available on the Health Foundation's website (www.health.org.uk/overcoming-challenges).

References

Supplementary materials

  • Supplementary Data

Footnotes

  • Funding The Health Foundation funded this review.

  • Competing interests None.

  • Patient consent Not applicable.

  • Provenance and peer review Not commissioned; internally peer reviewed.