The problem with Plan-Do-Study-Act cycles
Julie E Reed,1 Alan J Card2,3

1 NIHR CLAHRC NWL, Imperial College London, London, UK
2 Department of Management, University of Notre Dame, Notre Dame, Indiana, USA
3 Evidence-Based Health Solutions, LLC, Notre Dame, Indiana, USA

Correspondence to Dr Julie E Reed, NIHR CLAHRC NWL, Imperial College London, Chelsea and Westminster Hospital, 369 Fulham Road, London SW10 9NH, UK; julie.reed02{at}imperial.ac.uk

Introduction

Quality improvement (QI) methods have been introduced to healthcare to support the delivery of care that is safe, timely, effective, efficient, equitable and cost effective. Of the many QI tools and methods, the Plan-Do-Study-Act (PDSA) cycle is one of the few that focuses on the crux of change, the translation of ideas and intentions into action. As such, the PDSA cycle and the concept of iterative tests of change are central to many QI approaches, including the model for improvement,1 lean,2 six sigma3 and total quality management.4

PDSA provides a structured, experimental learning approach to testing changes. Concerns have previously been raised regarding the fidelity with which the PDSA method is applied, which may undermine learning efforts,5 the complexity of its use in practice,5,6 and the appropriateness of the method for addressing the significant challenges of healthcare improvement.7

This article presents our reflections on the full potential of using PDSA in healthcare, but in doing so we explore the inherent complexity and multiple challenges of executing PDSA well. Ultimately, we argue that the problem with PDSA is the oversimplification of the method as it has been translated into healthcare and the failure to invest in a rigorous and tailored application of the approach.

The value of PDSA in healthcare improvement

The purpose of the PDSA method lies in learning as quickly as possible whether an intervention works in a particular setting, and in making adjustments accordingly to increase the chances of delivering and sustaining the desired improvement. In contrast to controlled trials, PDSAs allow new learning to be built into this experimental process. If problems are identified with the original plan, the theory can be revised to build on this learning, and a subsequent experiment conducted to see whether the problem has been resolved and to identify any further problems that need to be addressed. In the complex social systems of healthcare, this flexibility and adaptability of PDSA are important features that support the adaptation of interventions to work in local settings.
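
For readers who find it helpful to see the logic spelled out, the iterative structure described above can be sketched in a few lines of Python. This is a minimal illustration only, not an implementation of any published PDSA tool: the intervention model, success rates, thresholds and names below are all hypothetical.

    import random

    random.seed(42)  # fixed seed so the illustration is reproducible

    def do_phase(strength, n_patients):
        # Do: run the change on a small sample and count successes
        return sum(random.random() < strength for _ in range(n_patients))

    def pdsa(target=0.9, n_patients=5, max_cycles=6):
        theory = {"strength": 0.5, "prediction": 0.5}  # Plan: initial change theory
        for cycle in range(1, max_cycles + 1):
            observed = do_phase(theory["strength"], n_patients) / n_patients
            # Study: compare observed results against the prediction
            print(f"cycle {cycle}: predicted {theory['prediction']:.2f}, "
                  f"observed {observed:.2f}")
            if observed >= target:
                return f"adopt after cycle {cycle}"  # Act: adopt the change
            # Act: revise the theory and plan an adapted test for the next cycle
            theory["strength"] = min(1.0, theory["strength"] + 0.15)
            theory["prediction"] = theory["strength"]
        return "abandon or reframe the problem"  # learning even without success

    print(pdsa())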

A successful PDSA process does not equal a successful QI project or programme. The intended output of PDSA is learning and informed action. Successful application of the PDSA methodology may enable users to achieve their QI goals more efficiently or to reach QI goals they would otherwise not have achieved. But it is also successful if it saves wasted effort by revealing QI goals that cannot be achieved under realistic constraints or if it identifies new problems to tackle instead of the originally identified issue. A well-conducted PDSA promises learning. But it does not, and cannot, promise that users will achieve their desired outcomes.

As PDSA has been translated into healthcare from industrial settings, an emphasis has been placed on rapid small-scale tests of change, often on one, three and then five patients in ‘ramps’ of increasing scale, with responsibility delegated to frontline staff and improvement or quality managers. This pragmatic approach has been embraced and seen as providing a new freedom for healthcare staff to lead change and improvement in local care settings.

However, the process of change rarely progresses in simple linear ramps.6,8 The conduct of PDSAs can reveal other related issues that need to be addressed in order to achieve the improvement goal. Such issues may relate to minor changes to current practices or processes of care, but can often reveal larger cultural or organisational issues that need to be addressed and overcome.

Recent evaluations have reported on the failure of the PDSA method to help frontline staff address the multiple improvement challenges they faced as the scale of investigation and the range of issues they needed to address increased.7,9 A report evaluating the Safer Clinical Systems programme in the UK identified ‘the need for clarity about when improvement approaches based on PDSA cycles are appropriate and when they are not’, viewing some challenges as ‘too big and hairy’ for the PDSA method and beyond the scope of small-scale tests of change run by local clinical teams.7

We argue that any improvement situation, no matter how big and hairy, is amenable to the application of the PDSA method. The four stages of PDSA mirror the scientific experimental method of formulating a hypothesis, collecting data to test this hypothesis, analysing and interpreting the results and making inferences to iterate the hypothesis.5,10

Whether improvement initiatives have been planned at national level to support standardisation of care or planned over a cup of coffee to solve a minor local problem, we believe there will always be a role for PDSA. In moving from planning to implementing a change in practice, PDSA provides a structure for experimental learning to establish whether a change has worked, and to learn from and act upon any new information that emerges as a result.

But it is not a magic bullet. Increasingly complex problems require increasingly sophisticated application of the PDSA method, and this is where we believe the problem with the PDSA method lies.

Its simplicity belies its sophistication

One of the main narratives surrounding the use of PDSA in healthcare is that it is easy, and can be applied in practice by anyone. At one level this is true, and the simplicity of the PDSA method and its applicability to many different situations can be viewed as one of its main strengths. However, this simplicity also creates some of the greatest challenges to using PDSA successfully. Users need to understand how to adapt the use of PDSA to address different problems and different stages in the lifecycle of each improvement project. This requires an extensive repertoire of skills and knowledge to be used in conjunction with the basic PDSA model.

One of the main problems encountered in using PDSA is the misperception that it can be used as a standalone method. PDSA needs to be used as part of a suite of QI methods, the exact nature of which may be influenced by the broader methodological approach that is being followed (eg, model for improvement, lean). An important role of the wider methodological approach is to conduct investigations prior to starting the use of PDSA to ensure that the problem is correctly understood and framed. Investigations can include process mapping, failure mode effects analysis, cause and effect analysis, stakeholder engagement and interviews, data analysis and review of existing evidence.

A second misperception is that PDSA is limited to small-scale tests of change on one, three and five patients. PDSA is an extremely flexible method that can be adapted to support the scale-up of interventions and used in conjunction with monitoring activities to support sustainability. But this flexibility gives rise to a number of key dimensions that require careful consideration: the scope and scale of change, the amount of preparation prior to use, the rigour of the evaluation, time, expertise, management support and funding must all be carefully aligned, and these needs must often be rebalanced over the project's lifecycle. If managed well, these adjustments enable the use of PDSA to adapt to new learning and support the design and conduct of ‘tests of change’ as they increase in scale, and often complexity, to achieve the desired improvement goal.
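
The ‘ramp’ of increasing scale, with an explicit study-and-act decision point before each increase, might be sketched as follows. This too is purely illustrative: the sample sizes beyond one, three and five, the simulated success model and the 0.6 threshold are hypothetical assumptions, not recommendations.

    import random

    random.seed(7)  # fixed seed for a reproducible illustration

    def test_of_change(n_patients, true_rate=0.8):
        # Do: simulate one test of change on n_patients
        return sum(random.random() < true_rate for _ in range(n_patients))

    for n in (1, 3, 5, 15, 50):  # scope and scale grow cycle by cycle
        rate = test_of_change(n) / n
        print(f"n={n:>2}: observed success rate {rate:.2f}")
        if rate < 0.6:  # Study/Act: adapt the plan before scaling up further
            print("  -> revise the plan before increasing scale")
            break
    else:
        print("ready to consider wider implementation")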

Using PDSA as an iterative design framework to help solve ‘big hairy problems’ or ‘big hairy audacious goals’11 is, therefore, entirely appropriate. In fact, developing solutions to large-scale ‘wicked problems’12 may require ‘an iterative explorative and generative’13 approach of the sort PDSA provides, in which ‘knowledge is built through designing’.13 The key is to understand that this framework will need to be implemented (and resourced) very differently for large and complex problems than for smaller and more ‘tame’ problems. One size does not fit all.

While frontline staff with little training or support may successfully address some quality problems, the complexity of many problems demands greater organisational support, with direct involvement of senior managers to facilitate adequate planning. Projects in which frontline staff must fend for themselves also run the risk of insufficient use of theory and existing evidence in developing the intervention, and of a suboptimal evaluation.

Quick (not dirty) tests of change

In healthcare, PDSA training often overemphasises the conceptual simplicity of the framework and underemphasises the different ways in which the method can be adapted to solve increasingly complex problems. This frequently leads people to leap into PDSA with insufficient prior investigation and framing of the problem, to delegate management of the process to frontline staff who have little influence over broader systemic concerns that need to be addressed, and to provide these staff with little support to overcome the obstacles and barriers they face. The resources, skills and expertise required to apply PDSA in the real world are often significantly underestimated, leading to projects that are destined to fail.

This has led to the impression that PDSA cycles involve ‘quick and dirty’ tests of change. In the rush to empower healthcare staff, there is a danger that the scientific rigour of the PDSA method is frequently compromised. A systematic review5 revealed that the core principles of PDSA are often not executed in practice, with ‘substantial variability with which they are designed, executed and reported in the healthcare literature’.6 A failure to properly execute PDSAs can undermine learning efforts… ‘if data collection does not occur frequently enough, if iterative cycles are few, and if system-level changes are not apparent as a result of these cycles, the improvement work is less likely to succeed’.6 While its scientific principles differ from those of controlled trials, rigour is still required in the application of PDSA to maximise the learning obtained from tests of change.

In addition to a lack of fidelity with PDSA guiding principles, there is the need to ensure that each stage of the cycle is conducted well. But the frenetic culture endemic in healthcare organisations can make it difficult to achieve sustained engagement in the deliberative processes of PDSA.

Just get on with it

While ‘planning paralysis’ can be an issue in healthcare organisations, the more common problem is a serious underinvestment in the planning phase. The pervasive cultural compulsion to ‘just get on with it’14 leads many teams to move too quickly from ‘plan’ to ‘do’. The consequences of skipping this up-front work can include wasted PDSA cycles or projects that fail altogether. Table 1 describes some of the key failure modes for the planning and preplanning (ie, investigation and problem-framing) steps of the PDSA process.

Table 1

Key failure modes for the investigation/problem framing and plan steps

Why do planning failures present such a challenge to the successful use of PDSA? It is much more difficult to correctly execute and learn from a plan that has not been well thought out. And even perfect execution cannot ensure success if the plan itself is wrong.

The iterative nature of PDSA enables course corrections, but this feature of the approach is much more effective if there was a clear and reasoned course in the first place. Many of the barriers to success in the do, study and act phases can be predicted and mitigated through more effective planning.

Overcoming the prevailing culture of ‘Do, Do, Do’

The structured, reflective practice required for PDSA runs counter to the main mode of operation in healthcare organisations, ‘doing’, with the time required for planning and reflection regarded as a luxury rather than a necessity. As a result, teams often get ‘stuck’ in the ‘do’ phase, failing to progress to the ‘study’ phase. While these problems may reflect poor planning, they may also be caused by problems beyond the control of the project team, such as the challenges of creating time to conduct tests of change, staff turnover and changing or competing priorities. To stop at the ‘do’ phase is to throw away the core contribution of PDSA: its support for iterative design as a way of making improvement interventions more successful.15 Another important but frequently overlooked part of the ‘do’ phase is inductive learning: noticing the unexpected and feeding these observations into the ‘study’ phase.

Poor planning or conduct of the ‘do’ phase can in turn significantly undermine the ‘study’ phase. In some cases, improvement teams appear to bypass the ‘study’ phase altogether, moving directly from ‘do’ to ‘act’.5 In other cases, the ‘study’ phase may collect insufficient data, or may not collect the right type of data, to answer questions about the intervention's effectiveness and acceptability. For instance, quantitative data can assess the impact of a given change, but without qualitative feedback the reasons for the results, staff attitudes and ideas about what could be improved will remain unknown. It is also possible that teams draw the wrong conclusions from the data they have collected or fail to notice unanticipated consequences, which may lead to incorrect actions.
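
As a hypothetical sketch of this point about data types, a ‘study’ phase record might pair the quantitative measure with qualitative feedback, so that the reasons behind the numbers are captured alongside the measured impact. The field names and example values below are illustrative only.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class StudyRecord:
        # Quantitative: did the change have the predicted impact?
        predicted_rate: float
        observed_rate: float
        # Qualitative: why, and what could be improved?
        staff_feedback: List[str] = field(default_factory=list)
        unexpected_observations: List[str] = field(default_factory=list)

    record = StudyRecord(
        predicted_rate=0.80,
        observed_rate=0.55,
        staff_feedback=["form too long to complete during night shifts"],
        unexpected_observations=["pharmacy delays dominate missed doses"],
    )
    # Study: the gap between prediction and observation prompts theory revision
    print(f"gap = {record.predicted_rate - record.observed_rate:.2f}")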

Failure to take appropriate action based on what was learned from the ‘study’ phase and previous PDSA cycles is another common concern.5 Inappropriate actions may include adopting or scaling up an intervention that has not proven effective and acceptable,16 or ending a project that has proved successful, or is on track to do so. An important part of the act phase consists of reviewing and revising the theory of how the intervention is intended to achieve its desired impact. This iterative refinement of theory is a key component of PDSA methodology, which is often overlooked in practice.

Effectively managing the PDSA process is about more than individual PDSA steps or cycles. Connecting PDSA cycles together is a messier and far more complicated endeavour than most of the literature on the approach suggests.6 Progression across cycles is seldom linear, and double-loop learning17 may lead to revised goals as well as revised interventions; managing this emergent learning and coordinating PDSA activities over time requires significant oversight.
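
The distinction between single-loop learning (revising the intervention against a fixed goal) and double-loop learning (revising the goal itself) can be sketched as below; the goals, interventions and decision rule shown are hypothetical.

    def act(goal, intervention, findings):
        # Act phase: decide what the next cycle should change
        if findings["goal_still_appropriate"]:
            # single-loop learning: keep the goal, adapt the intervention
            return goal, findings["suggested_adaptation"]
        # double-loop learning: the study phase has challenged the goal itself
        return findings["revised_goal"], intervention

    goal, intervention = act(
        goal="reduce missed doses on ward A",
        intervention="pharmacist-led checklist",
        findings={
            "goal_still_appropriate": False,
            "suggested_adaptation": "shorten the checklist",
            "revised_goal": "improve prescribing accuracy upstream",
        },
    )
    print(goal, "|", intervention)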

Table 2 describes some of the key failure modes for the execution of the do, study and act steps of the PDSA process.

Table 2

Key failure modes for executing the do, study and act steps

The problem with PDSA: failure to invest in rigorous and tailored application

While the PDSA method is conceptually simple, simple does not mean easy. That said, PDSA is a powerful approach, and projects that make successful use of PDSA can solve specific quality problems and also help shape the culture of healthcare organisations for the better. So, the effort required to apply PDSA successfully has a substantial return on investment. But the resources and supportive context required for success (including funding, methodological expertise, buy-in and sustained effort)18 are often underestimated. Inadequate human resources and financial support doom many projects to failure and also undermine organisational culture, contributing to change fatigue and disillusionment as yet another project produces no real improvement. It is therefore crucial, at both the project level and the programmatic level, that the resource requirements for successful application of PDSA for a given project are well understood and that the process is well managed.

The barriers to ensuring this type of practice in a healthcare culture of ‘just get on with it’ and ‘do, do, do’ are difficult to overcome. To be successful, the use of PDSA must be supported by a significant investment in leadership, expertise and resources for change.

Academia and researchers have a potential role to play in supporting appropriate rigour in planning and studying, and in understanding how to manage emergent learning while engaging diverse stakeholder groups. Working in partnership will be beneficial in supporting effective use of PDSA and is essential to establishing genuine learning organisations.19,20


Footnotes

  • Twitter Follow Julie Reed at @julie4clahrc and Alan Card at @AlanJCard

  • Competing interests None declared.

  • Disclaimer This article presents independent research commissioned by the National Institute for Health Research (NIHR) under the Collaborations for Leadership in Applied Health Research and Care (CLAHRC) programme for North West London. The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

  • Provenance and peer review Commissioned; internally peer reviewed.
