Building knowledge, asking questions
Greg Ogrinc1, Kaveh G Shojania2

1 White River Junction VA, Community and Family Medicine, and Medicine, Geisel School of Medicine at Dartmouth, White River Junction, Vermont, USA
2 University of Toronto, Centre for Quality Improvement and Patient Safety (C-QuIPS), Toronto, Ontario, Canada

Correspondence to Dr Greg Ogrinc, White River Junction VA, Quality Scholars, White River Junction, Vermont 05009, USA; greg.ogrinc@va.gov


In his recent book Ignorance: how it drives science, Stuart Firestein states, ‘Knowledge is a big subject. Ignorance is bigger.’1 Firestein's book explores not the ways of knowledge but the mechanisms by which scientists work to develop and answer questions…which invariably lead to more questions. ‘Not knowing’ is a key driver of research and of quality improvement (QI). While research seeks to create new generalisable knowledge, QI often focuses on improving a specific aspect of healthcare delivery that is not consistently or appropriately implemented in a particular setting. A clinical researcher often asks questions such as, ‘Is X a risk factor for Y?’ or ‘Is treatment A more effective than treatment B?’ Those engaged in QI, by contrast, pose questions such as, ‘Why does routine care delivery fall short of standards we know we can achieve?’ or ‘How can we close this gap between what we know can be achieved and what occurs in practice?’

There are multiple ways to ask questions, answer questions and build knowledge. In healthcare research, specific and well-described methods exist for the design, execution and analysis of defined questions. Questions regarding implementation in a specific setting, integrating evidence into practice, or improving the efficiency of local systems are often best answered using methods that differ from traditional methods of clinical research (eg, controlled clinical trials). While a number of formal methods exist for implementing QI in practice—the model for improvement, Lean or Six Sigma—all advocate the use of small tests of change. Small tests of change enable one to learn how a particular intervention works in a particular setting. The goal of these methods is not to test a hypothesis but rather to gain insight into the workings of a system and improve that system. The most common approach to developing and testing small tests of change is the plan–do–study–act (PDSA) cycle.2

Theory versus reality for PDSA

One of the challenges with PDSA cycles is the substantial variability with which they are designed, executed and reported in the healthcare literature. Taylor et al3 review how clearly PDSA cycles are reported in the literature. They found that fewer than 20% of papers documented a sequence of iterative cycles, and only about 15% of articles reported the use of quantitative data at monthly or more frequent intervals to inform the progression of cycles. The latter point is particularly troubling: collecting data less often than monthly hardly qualifies as rapid-cycle improvement. A core aim of the PDSA method is to collect and analyse data weekly, biweekly or monthly, allowing a team to identify what works and amplify it, while changing tactics for aspects of the intervention that are not working. Taylor et al show that most QI reports do not present data frequently enough, but it remains unclear whether the PDSA cycles themselves fell short of what QI theory recommends or were simply summarised poorly in the articles.

Lectures, textbooks and review articles that teach PDSA typically depict the cycles as a smooth progression, with each cycle seamlessly and iteratively building on the previous one. As the number of cycles increases, their effectiveness and overall cumulative effect strengthen (figure 1).2 However, those who have engaged in small tests of change quickly recognise that this pristine view of PDSA does not capture reality. As Tomolo et al4 have described, this type of work involves frequent ‘false starts, misfirings, plateaus, regroupings, backsliding, feedback, and overlapping scenarios within the process.’ Far from the commonly shown schematic of perfect circles rolling up the hill of change, they depict a complex, tangled network in which the changes nonetheless move inexorably towards better performance (figure 2). This complex figure is unlikely to replace the PDSA diagrams that typically introduce the model to novices, but it appropriately captures the numerous starts, stops, backtracking and often incomplete cycles of change that occur in practice. It captures the inherent messiness of the work required to improve the quality, safety and value of care within our delivery systems.

Figure 1

Traditional view of successive plan–do–study–act (PDSA) cycles over time depicted as a linear process. Each preceding PDSA informs the next one. As time goes on, the complexity of each intervention and trial often increases.2

Figure 2

Revised conceptual model of plan–do–study–act (PDSA) methodology.4

If this more accurately describes the reality of PDSA cycles, is the method then too imprecise to build knowledge and to share with others? Some may advocate more structure and precision in PDSA cycles, so that the improvement methodology is ‘cleaner’ and more akin to research methodology. But, as Firestein argues in Ignorance, new technologies and methods are often required to advance knowledge from its current state.1 Answering new questions does not come from repeatedly applying the same methodology (eg, applying the methods of clinical research to QI problems), but from applying methods appropriate to the questions ahead. The clinical trial has proved a remarkably powerful tool for advancing knowledge of effective diagnostic and treatment modalities, but it is often not appropriate for answering QI questions. PDSA cycles, developed and used in industry for almost a century, are still relatively new in healthcare, so the planning, execution and reporting of PDSA cycles must be improved to share the learning that occurs while undertaking QI work.

Reporting PDSA in the literature

The Standards for QUality Improvement Reporting Excellence (SQUIRE) guidelines (http://www.squire-statement.org) aimed to reduce uncertainty about the information included in scholarly reports of healthcare improvement and thus to increase the completeness, precision and transparency of QI reports in the literature.5 SQUIRE drew on the general principles used in other scientific reporting guidelines, such as those for randomised controlled trials, observational studies and evaluations of diagnostic tests.6 Because of the nature of QI work, however, SQUIRE seeks to balance the reporting of changes and the learning that occurs during improvement work with the context and mechanisms that produced the outcomes.

While SQUIRE provides some guidance for describing rapid-cycle, iterative changes, the results reported by Taylor et al3 highlight the need for more direction and clarity about what to report. The SQUIRE guidelines are currently being revised with a clearer focus on describing what was done to make improvements while demonstrating the impact of those improvements. This challenging endeavour aims to address some of the deficits found in published QI reports that involved PDSA cycles.

One of the challenges in developing (and revising) SQUIRE consists of describing the development of a complex intervention while also providing the details needed to demonstrate the impact of that intervention. A tension exists between reporting on the improvement of healthcare and the study of such improvement. ‘Here are all the steps we had to take to improve X’ can easily fill a manuscript, as can ‘Here are the data that demonstrate our success in improving X’. Authors often struggle with these twin goals, but methods such as the PDSA cycle can help to link the doing of improvement with the demonstration of its impact.

Planning, doing, analysing and reporting of QI work are interrelated, just as they are in clinical research. One would hardly want a trialist to encounter the concepts of a placebo control or blinding participants to treatment status for the first time at the writing stage. In other words, guidelines for reporting clinical trials, such as CONSORT, or observational studies, such as STROBE, may also help with planning and conducting them.6 Similarly, the individual items in SQUIRE support not just the reporting of QI work but also its design and execution. For example, SQUIRE recommends reporting aspects of the local context relevant to the effectiveness of an intervention by describing ‘elements of the local care environment considered most likely to influence change/improvement’.5 Reporting such details helps readers to interpret the results and to determine whether similar interventions might work in their own settings. However, these elements must be identified and collected prospectively during the QI work in order to determine the impact; they cannot be retrofitted once the QI work is completed and the writing begins. Determining the success of an intervention should include the aspects of the local context relevant to the theory of the intervention.7

Those engaged in QI work must recognise the proper execution of PDSA cycles as fundamental to their efforts, not just a formality for reporting and publishing. If data collection does not occur frequently enough, if iterative cycles are few, and if system-level changes are not apparent as a result of these cycles, the improvement work is less likely to succeed. Nor is it just a matter of what to report: when PDSA cycles are carried out appropriately but not clearly reported, learning is undermined and we fail to share and build knowledge about the improvement of healthcare. Clarity in the writing and reporting of PDSA cycles brings us one step closer to addressing the problems that arise in healthcare improvement so that we can identify, address and answer more questions in the future.

References

Footnotes

  • Contributors Both authors contributed to the submission of this paper.

  • Competing interests Both authors are editors at BMJ Quality & Safety.

  • Provenance and peer review Internally commissioned and reviewed.
