Abstract
Since its publication in 2008, SQUIRE (Standards for Quality Improvement Reporting Excellence) has contributed to the completeness and transparency of reporting of quality improvement work, providing guidance to authors and reviewers of reports on healthcare improvement work. In the interim, enormous growth has occurred in understanding factors that influence the success, and failure, of healthcare improvement efforts. Progress has been particularly strong in three areas: the understanding of the theoretical basis for improvement work; the impact of contextual factors on outcomes; and the development of methodologies for studying improvement work. Consequently, there is now a need to revise the original publication guidelines. To reflect the breadth of knowledge and experience in the field, we solicited input from a wide variety of authors, editors and improvement professionals during the guideline revision process. This Explanation and Elaboration document (E&E) is a companion to the revised SQUIRE guidelines, SQUIRE 2.0. The product of collaboration by an international and interprofessional group of authors, this document provides examples from the published literature, and an explanation of how each reflects the intent of a specific item in SQUIRE. The purpose of the guidelines is to assist authors in writing clearly, precisely and completely about systematic efforts to improve the quality, safety and value of healthcare services. Authors can explore the SQUIRE statement, this E&E and related documents in detail at http://www.squire-statement.org.
- Health services research
- Implementation science
- Quality improvement
- Quality improvement methodologies
- Statistical process control
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Background
The past two decades have seen a proliferation in the number and scope of reporting guidelines in the biomedical literature. The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines are intended as a guide to authors reporting on systematic, data-driven efforts to improve the quality, safety and value of healthcare. SQUIRE was designed to increase the completeness and transparency of reporting of quality improvement work, and since its publication in 2008,1 has contributed to the development of this body of literature by providing a guide to authors, editors, reviewers, educators and other stakeholders.
An Explanation and Elaboration document (E&E) was published in 2008 alongside the original SQUIRE guidelines, which we will refer to as SQUIRE 1.0.1 The goal of the E&E was to help authors interpret the guidelines by explaining the rationale behind each item, and providing examples of how the items might be used. The concurrent publication of an explanatory document is consistent with the practices used in developing and disseminating other scientific reporting guidelines.2–5
Evolution of the field
The publication of guidelines for quality improvement reports (QIRs) in 19996 laid the foundation for reporting about systematic efforts to improve the quality, safety and value of healthcare. A goal of the QIRs was to share and promote good practice through brief descriptions of quality improvement projects. The publication of SQUIRE 1.0 represented a transition from primarily reporting outcomes to the reporting of both what was done to improve healthcare and the study of that work. SQUIRE guided authors in describing the design and impact of an intervention, how the intervention was implemented, and the methods used to assess the internal and external validity of the study's findings, among other details.1
Since the publication of SQUIRE 1.0, enormous progress has occurred in understanding what influences the success (or lack of success) of healthcare improvement efforts. Scholarly publications have described the importance of theory in healthcare improvement work, as well as the adaptive interaction between interventions and contextual elements, and a variety of designs for drawing inferences from that work.7–12 Despite considerable progress in understanding, managing and studying these areas, much work remains to be done. The need for publication guidelines that can assist authors in writing transparently and completely about improvement work is at least as great now as it was in 2008.
SQUIRE 1.0 has been revised to reflect this progress in the field through an iterative process which has been described in detail elsewhere.13,14 The goal of this revision was to make SQUIRE simpler, clearer, more precise, easier to use and even more relevant to a wide range of approaches to improving healthcare. To this end, a diverse group of stakeholders came together to develop the content and format of the revised guidelines. The revision process included an evaluation of SQUIRE 1.0,13 two consensus conferences and pilot testing of a draft of the guidelines, ending with a public comment period to engage potential SQUIRE users not already involved in writing about quality improvement (ie, students, fellows and front-line staff engaged in improvement work outside of academic medical centres).
Using the SQUIRE explanation and elaboration document
This Explanation and Elaboration document is designed to support authors in the use of the revised SQUIRE guidelines by providing representative examples of high-quality reporting of SQUIRE 2.0 content items, followed by analysis of each item, and consideration of the features of the chosen example that are consistent with the item's intent (table 1). Each sequential sub-subsection of this E&E document was written by a contributing author or authors, chosen for their expertise in that area. Contributors are from a variety of disciplines and professional backgrounds, reflecting a wide range of knowledge and experience in healthcare systems in Sweden, the UK, Canada and the USA.
SQUIRE 2.0 is intended for reports that describe systematic work to improve the quality, safety and value of healthcare, using a range of methods to establish the association between observed outcomes and intervention(s). SQUIRE 2.0 applies to the reporting of qualitative and quantitative evaluations of the nature and impact of interventions intended to improve healthcare, with the understanding that the guidelines may be adapted as needed for specific situations. When appropriate, SQUIRE can, and should, be used in conjunction with other publication guidelines such as the TIDieR guidelines (Template for Intervention Description and Replication Checklist and Guide).2 While we recommend that authors consider every SQUIRE item in the writing process, some items may not be relevant for inclusion in a particular manuscript. The addition of a glossary of key terms, linked to SQUIRE and the E&E, and interactive electronic resources (http://www.squire-statement.org), provides further opportunity to engage with SQUIRE 2.0 on a variety of levels.
Explanation and elaboration of SQUIRE guideline items
Title and abstract
Title
Indicate that the manuscript concerns an initiative to improve healthcare (broadly defined to include the quality, safety, effectiveness, patient-centredness, timeliness, cost, efficiency, and equity of healthcare, or access to it).
Example 1: Reducing post-caesarean surgical wound infection rate: an improvement project in a Norwegian maternity clinic.15
Example 2: Large scale organizational intervention to improve patient safety in four UK hospitals: mixed method evaluation.16
Explanation
The title of a healthcare improvement report should indicate that it is about an initiative to improve safety, value and/or quality in healthcare, and should describe the aim of the project and the context in which it occurred. Because the title of a paper provides the first introduction of the work, it should be both descriptive and simply written to invite the reader to learn more about the project. Both examples given above do this well.
Authors should consider using terms which allow the reader to identify easily that the project is within the field of healthcare improvement, and/or state this explicitly as in the examples above. This information also facilitates the correct assignment of medical subject headings (MeSH) in the National Library of Medicine's Medline database. As of 2015, healthcare improvement-related MeSH terms include: Health Care Quality Access and Evaluation; Quality Assurance; Quality Improvement; Outcome and Process Assessment (Healthcare); Quality Indicators, Health Care; Total Quality Management; Safety Management (http://www.nlm.nih.gov/mesh/MBrowser.html). Sample keywords which might be used in connection with improvement work include Quality, Safety, Evidence, Efficacy, Effectiveness, Theory, Interventions, Improvement, Outcomes, Processes and Value.
Abstract
Provide adequate information to aid in searching and indexing
Summarise all key information from various sections of the text using the abstract format of the intended publication or a structured summary such as: background, local problem, methods, interventions, results, conclusions
Example:
Background: Pain assessment documentation was inadequate because of the use of a subjective pain assessment strategy in a tertiary level IV neonatal intensive care unit [NICU]. The aim of this study was to improve consistency of pain assessment documentation through implementation of a multidimensional neonatal pain and sedation assessment tool. The study was set in a 60-bed level IV NICU within an urban children's hospital. Participants included NICU staff, including registered nurses, neonatal nurse practitioners, clinical nurse specialists, pharmacists, neonatal fellows, and neonatologists.
Methods: The Plan Do Study Act method of quality improvement was used for this project. Baseline assessment included review of patient medical records 6 months before the intervention. Documentation of pain assessment on admission, routine pain assessment, reassessment of pain after an elevated pain score, discussion of pain in multidisciplinary rounds, and documentation of pain assessment were reviewed. Literature review and listserv query were conducted to identify neonatal pain tools.
Intervention: Survey of staff was conducted to evaluate knowledge of neonatal pain and also to determine current healthcare providers’ practice as related to identification and treatment of neonatal pain. A multidimensional neonatal pain tool, the Neonatal Pain, Agitation, and Sedation Scale [N-PASS], was chosen by the staff for implementation.
Results: Six months and 2 years following education on the use of the N-PASS and implementation in the NICU, a chart review of all hospitalized patients was conducted to evaluate documentation of pain assessment on admission, routine pain assessment, reassessment of pain after an elevated pain score, discussion of pain in multidisciplinary rounds, and documentation of pain assessment in the medical progress note. Documentation of pain scores improved from 60% to 100% at 6 months and remained at 99% 2 years following implementation of the N-PASS. Pain score documentation with ongoing nursing assessment improved from 55% to greater than 90% at 6 months and 2 years following the intervention. Pain assessment documentation following intervention of an elevated pain score was 0% before implementation of the N-PASS and improved slightly to 30% 6 months and 47% 2 years following implementation.
Conclusions: Identification and implementation of a multidimensional neonatal pain assessment tool, the N-PASS, improved documentation of pain in our unit. Although improvement in all quality improvement monitors was noted, additional work is needed in several key areas, specifically documentation of reassessment of pain following an intervention for an elevated pain score.
Keywords: N-PASS, neonatal pain, pain scores, quality improvement.17
Explanation
The purpose of an abstract is twofold: first, to summarise all key information from various sections of the text using the abstract format of the intended publication or a structured summary of the background, specific problem to be addressed, methods, interventions, results and conclusions; and second, to provide adequate information to aid in searching and indexing.
The abstract is meant to be both descriptive, indicating the purpose, methods and scope of the initiative, and informative, including the results, conclusions and recommendations. It needs to contain sufficient information about the article to allow a reader to quickly decide if it is relevant to their work and if they wish to read the full-length article. Additionally, many online databases such as Ovid and CINAHL use abstracts to index the article so it is important to include keywords and phrases that will allow for quick retrieval in a literature search. The example given includes these.
Journals have varying requirements for the format, content length and structure of an abstract. The above example illustrates how the important components of an abstract can be effectively incorporated in a structured abstract. It is clear that it is a healthcare improvement project. Some background information is provided, including a brief description of the setting and the participants, and the aim/objective is clearly stated. The methods section describes the strategies used for the interventions, and the results section includes data that delineates the impact of the changes. The conclusion section provides a succinct summary of the project, what led to its success and lessons learned. This abstract is descriptive and informative, allowing readers to determine whether they wish to investigate the article further.
Introduction
Problem description
Nature and significance of the local problem.
Available knowledge
Summary of what is currently known about the problem, including relevant previous studies.
Example: Central venous access devices place patients at risk for bacterial entry into the bloodstream, facilitate systemic spread, and contribute to the development of sepsis. Rapid recognition and antibiotic intervention in these patients, when febrile, are critical. Delays in time to antibiotic [TTA] delivery have been correlated with poor outcomes in febrile neutropenic patients.2 TTA was identified as a measure of quality of care in pediatric oncology centers, and a survey reported that most centers used a benchmark of <60 minutes after arrival, with >75% of pediatric cancer clinics having a mean TTA of <60 minutes…
The University of North Carolina [UNC] Hospitals ED provides care for ∼65 000 patients annually, including 14 000 pediatric patients aged <19 years. Acute management of ambulatory patients who have central lines and fever often occurs in the ED. Examination of a 10-month sample revealed that only 63% of patients received antibiotics within 60 minutes of arrival…18
Explanation
The introduction section of a quality improvement article clearly identifies the current relevant evidence, the best practice standard based on the current evidence and the gap in quality. A quality gap describes the difference between practice at the local level and the achievable evidence-based standard. The authors of this article describe the problem and identify the quality gap by stating that “Examination of a 10-month sample revealed only 63% of patients received antibiotics within 60 minutes of arrival”, falling short of the <60-minute benchmark, and by noting that delays in delivering antibiotics led to poorer outcomes.18 The timing of antibiotic administration at the national level compared with the local level provides an achievable standard of care, which helps the authors determine the goal for their antibiotic administration improvement project.
Providing a summary of the relevant evidence and what is known about the problem provides background and support for the improvement project and increases the likelihood of sustainable success. The contextual information provided by describing the local system clarifies the project and shows how suboptimal antibiotic administration negatively impacts quality. Missed diagnoses, delayed treatments, increased morbidity and increased costs are associated with a lack of quality, with relevance and implications at both the local and national levels.
Improvement work can also be done on a national or regional level. In this case, the term ‘local’ in the SQUIRE guidelines should be interpreted more generally as the specific problem to be addressed. For example, Murphy et al describe a national initiative addressing a healthcare quality issue.19 The introduction section in this article also illuminates current relevant evidence, best practice based on the current evidence, and the gap in quality. However, the quality gap reported here is the difference in knowledge of statin use for patients at high risk of cardiovascular morbidity and mortality in Ireland compared with European clinical guidelines: “Despite strong evidence and clinical guidelines recommending the use of statins for secondary prevention, a gap exists between guidelines and practice … A policy response that strengthens secondary prevention, and improves risk assessment and shared decision-making in the primary prevention of CVD [cardiovascular disease] is required.”19
Improvement work can also address a gap in knowledge, rather than quality. For example, work might be done to develop tools to assess patient experience for quality improvement purposes.20 Interventions to improve patient experience, or to enhance team communication about patient safety21 may also address quality problems, but in the absence of an established, evidence-based standard.
Rationale
Informal or formal frameworks, models, concepts, and/or theories used to explain the problem, any reasons or assumptions that were used to develop the intervention(s), and reasons why the intervention(s) was expected to work.
Example 1: The team used a variety of qualitative methods …to understand sociotechnical barriers. At each step of collection, we categorised data according to the FITT [‘Fit between Individuals, Task, and Technology’] model criteria … Each component of the activity system [ie, user, task and technology] was clearly defined and each interface between components was explored by drawing from several epistemological disciplines including the social and cognitive sciences. The team designed interventions to address each identified FITT barrier… By striving to understand the barriers affecting activity system components and the interfaces between them, we were able to develop a plan that addressed user needs, implement an intervention that articulated with workflow, study the contextual determinants of performance, and act in alignment with stakeholder expectations.22
Example 2: …We describe the development of an intervention to improve medication management in multimorbidity by general practitioners (GPs), in which we applied the steps of the BCW [Behaviour Change Wheel]23 to enable a more transparent implementation of the MRC [Medical Research Council] framework for design and evaluation of complex interventions…
…we used the COM-B [capability, opportunity, motivation—behaviour] model to develop a theoretical understanding of the target behaviour and guide our choice of intervention functions. We used the COM-B model to frame our qualitative behavioural analysis of the qualitative synthesis and interview data. We coded empirical data relevant to GPs’ …capabilities, …opportunities and …motivations to highlight why GPs were or were not engaging in the target behaviour and what needed to change for the target behaviour to be achieved.
The BCW incorporates a comprehensive panel of nine intervention functions, shown in figure 1, which were drawn from a synthesis of 19 frameworks of behavioural-intervention strategies. We determined which intervention functions would be most likely to effect behavioural change in our intervention by mapping the individual components of the COM-B behavioural analysis onto the published BCW linkage matrices…24
Explanation
The label ‘rationale’ for this guideline item refers to the reasons the authors have for expecting that an intervention will ‘work.’ A rationale is always present in the heads of researchers; however, it is important to make this explicit and communicate it in healthcare quality improvement work. Without this, learning from empirical studies may be limited and opportunities for accumulating and synthesising knowledge across studies restricted.8
Authors can express a rationale in a variety of ways, and in more than one way in a specific paper. These include providing an explanation, specifying underlying principles, hypothesising processes or mechanism of change, or producing a logic model (often in the form of a diagram) or a programme theory. The rationale may draw on a specific theory with clear causal links between constructs or on a general framework which indicates potential mechanisms of change that an intervention could target.
A well developed rationale allows the possibility of evaluating not just whether the intervention had an effect, but how it had that effect. This provides a basis for understanding the mechanisms of action of the intervention, and how it is likely to vary across, for example, populations, settings and targets. An explicit rationale leads to specific hypotheses about mechanisms and/or variation, and testing these hypotheses provides valuable new knowledge, whether or not they are supported. This knowledge lays the foundation for optimising the intervention, accumulating evidence about mechanisms and variation, and advancing theoretical understanding of interventions in general.
The first example shows how a theory (the ‘Fit between Individuals, Task and Technology’ framework) can identify and clarify the social and technological barriers to healthcare improvement work. The study investigated engagement with a computerised system to support decisions about postoperative deep vein thrombosis (DVT) prophylaxis: use of the framework led to 11 distinct barriers being identified, each associated with a clearly specified intervention which was undertaken.
The second example illustrates the use of an integrative theoretical framework for intervention development.25 The authors used an integrative framework rather than a specific theory/model/framework. This was in order to start with as comprehensive a framework as possible, since many theories of behaviour change are partial. This example provides a clear description of the framework and how analysing the target behaviour using an integrative theoretical model informed the selection of intervention content.
Interventions may be effective without the effects being brought about by changes identified in the hypothesised mechanisms; on the other hand, they may activate the hypothesised mechanisms without changing behaviour. The knowledge gained through a theory-based evaluation is essential for understanding processes of change and, hence, for developing more effective interventions. This paper also cited evidence for, and examples of, the utility of the framework in other contexts.
Specific aims
Purpose of the project and of this report
Example: The collaborative quality improvement [QI] project described in this article was conducted to determine whether care to prevent postoperative respiratory failure as addressed by PSI 11 [Patient Safety Indicator #11, a national quality indicator] could be improved in a Virtual Breakthrough Series [VBTS] collaborative…26
Explanation
The specific aim of a project describes why it was conducted, and the goal of the report. It is essential to state the aims of improvement work clearly, completely and precisely. Specific aims should align with the nature and significance of the problem, the gap in quality, safety and value identified in the introduction, and reflect the rationale for the intervention(s). The example given makes it clear that the goal of this multisite initiative was to reduce postoperative respiratory failure by using a virtual breakthrough series.
When appropriate, the specific aims section of a report about healthcare improvement work should state that both process and outcomes will be assessed. Focusing only on assessment of outcomes ignores the possibility that clinicians may not have adopted the desired practice, or did not adopt it effectively, during the study period. Changing care delivery is the foundation of improvement work and should also be measured and reported. In the subsequent methods section, the example presented here also describes the process measures used to evaluate the VBTS.
Methods
Context
Contextual elements considered important at the outset of introducing the intervention(s)
Example 1: CCHMC [Cincinnati Children's Hospital Medical Center] is a large, urban pediatric medical center and the Bone Marrow Transplant [BMT] team performs 100 to 110 transplants per year. The BMT unit contains 24 beds and 60–70% of the patients on the floor are on cardiac monitors…The clinical providers…include 14 BMT attending physicians, 15 fellows, 7 NPs [nurse practitioners], and 6 hospitalists…The BMT unit employs ∼130 bedside RNs [registered nurses] and 30 PCAs [patient care assistants]. Family members take an active role…27
Example 2: Pediatric primary care practices were recruited through the AAP QuIIN [American Academy of Pediatrics Quality Improvement Innovation Network] and the Academic Pediatric Association's Continuity Research Network. Applicants were told that Maintenance of Certification [MOC] Part 4 had been applied for, but was not assured. Applicant practices provided information on their location, size, practice type, practice setting, patient population and experience with quality improvement [QI] and identified a 3-member physician-led core improvement team. … Practices were selected to represent diversity in practice types, practice settings, and patient populations. In each selected practice the lead core team physician and in some cases the whole practice had previous QI experience…table 1 summarizes practice characteristics for the 21 project teams.28
Explanation
Context is known to affect the process and outcome of interventions to improve the quality of healthcare.29 This section of a report should describe the contextual factors that authors considered important at the outset of the improvement initiative. The goal of including information on context is twofold. First, describing the context in which the initiative took place is necessary to assist readers in understanding whether the intervention is likely to ‘work’ in their local environment and, more broadly, the generalisability of the findings. Second, it enables the researchers to examine the role of context as a moderator of successful intervention(s). Specific and relevant elements of context thought to optimise the likelihood of success should be addressed in the design of the intervention, and plans should be made a priori to measure these factors and examine how they interact with the success of the intervention.
Describing the context within the methods section orients the reader to where the initiative occurred. In single-centre studies, this description usually includes information about the location, patient population, size, staffing, practice type, teaching status, system affiliation and relevant processes in place at the start of the intervention, as is demonstrated in the first example by Dandoy et al27 reporting a QI effort to reduce monitor alarms. Similar information is also provided in aggregate for multicentre studies. In the second example by Duncan et al,28 a table is used to describe the practice characteristics of the 21 participating paediatric primary care practices, and includes information on practice type, practice setting, practice size, patient characteristics and use of an electronic health record. This information can be used by the reader to assess whether his or her own practice setting is similar enough to the practices included in this report to enable extrapolation of the results. The authors state that they selected practices to achieve diversity in these key contextual factors. This was likely done so that the team could assess the effectiveness of the interventions in a range of settings and increase the generalisability of the findings.
Any contextual factors believed a priori to impact the success of the intervention should be specifically discussed in this section. Although the authors' rationale is not explicitly stated, the example suggests that they had specific hypotheses about key aspects of a practice's context that would impact implementation of the interventions. They addressed these contextual factors in the design of their study in order to increase the likelihood that the intervention would be successful. For example, they stated specifically that they selected practices with previous healthcare improvement experience and strong physician leadership. In addition, the authors noted that practices were recruited through an existing research consortium, indicating their belief that project sponsorship by an established external network could impact success of the initiative. They also noted that practices were made aware that American Board of Pediatrics Maintenance of Certification Part 4 credit had been applied for but not assured, implying that the authors believed incentives could impact project success. While addressing context in the design of the intervention may increase the likelihood of success, these choices limit the generalisability of the findings to other similar practices with prior healthcare improvement experience, strong physician leadership and available incentives.
This example could have been strengthened by using a published framework such as the Model for Understanding Success in Quality (MUSIQ),10 the Consolidated Framework for Implementation Research (CFIR),29 or the Promoting Action on Research Implementation in Health Services (PARiHS) model30 to identify the subset of relevant contextual factors that would be examined.10,11 The use of such frameworks is not a requirement but a helpful option for approaching the issue of context. The relevance of any particular framework can be determined by authors based on the focus of their work—MUSIQ was developed specifically for microsystem or organisational QI efforts, whereas CFIR and PARiHS were developed more broadly to examine implementation of evidence or other innovations.
If elements of context are hypothesised to be important, but are not going to be addressed specifically in the design of the intervention, plans to measure these contextual factors prospectively should be made during the study design phase. In these cases, measurement of contextual factors should be clearly described in the methods section, data about how contextual factors interacted with the interventions should be included in the results section, and the implications of these findings should be explored in the discussion. For example, if the authors of the examples above had chosen this approach, they would have measured participating teams' prior healthcare improvement experience and looked for differences in successful implementation based on whether practices had prior experience or not. In cases where context was not addressed prospectively, authors are still encouraged to explore the impact of context on the results of intervention(s) in the discussion section.
Intervention(s)
Description of the intervention(s) in sufficient detail that others could reproduce it
Specifics of the team involved in the work
Example 1: We developed the I-PASS Handoff Bundle through an iterative process based on the best evidence from the literature, our previous experience, and our previously published conceptual model. The I-PASS Handoff Bundle included the following seven elements: the I-PASS mnemonic, which served as an anchoring component for oral and written handoffs and all aspects of the curriculum; a 2-hour workshop [to teach TeamSTEPPS teamwork and communication skills, as well as I-PASS handoff techniques], which was highly rated; a 1-hour role-playing and simulation session for practicing skills from the workshop; a computer module to allow for independent learning; a faculty development program; direct-observation tools used by faculty to provide feedback to residents; and a process-change and culture-change campaign, which included a logo, posters, and other materials to ensure program adoption and sustainability. A detailed description of all curricular elements and the I-PASS mnemonic have been published elsewhere and are provided in online supplementary appendix table, available with the full text of this article at NEJM.org. I-PASS is copyrighted by Boston Children's Hospital, but all materials are freely available.
Each site integrated the I-PASS structure into oral and written handoff processes; an oral handoff and a written handoff were expected for every patient. Written handoff tools with a standardized I-PASS format were built into the electronic medical record programs [at seven sites] or word-processing programs [at two sites]. Each site also maintained an implementation log that was reviewed regularly to ensure adherence to each component of the handoff program.21
Example 2: All HCWs [healthcare workers] on the study units, including physicians, nurses and allied health professionals, were invited to participate in the overall study of the RTLS [real-time location system] through presentations by study personnel. Posters describing the RTLS and the study were also displayed on the participating units… Auditors wore white lab coats as per usual hospital practice and were not specifically identified as auditors but may have been recognisable to some HCWs. Auditors were blinded to the study hypothesis and conducted audits in accordance with the Ontario Just Clean Your Hands programme.31
Explanation
In the same way that reports of basic science experiments provide precise details about the quantity, specifications and usage of reagents, equipment, chemicals and materials needed to run an experiment, so too should the description of the healthcare improvement intervention include or reference enough detail that others could reproduce it. Improvement efforts are rarely unimodal and descriptions of each component of the intervention should be included. For additional guidance regarding the reporting of interventions, readers are encouraged to review the TIDieR guidelines: http://www.ncbi.nlm.nih.gov/pubmed/24609605.
In the first example above21 about the multisite I-PASS study to improve paediatric handoff safety, the authors describe seven different elements of the intervention, including a standardised mnemonic, several educational programmes, a faculty development programme, observation/feedback tools and even the publicity materials used to promote the intervention. Every change that could have contributed to the observed outcome is noted. Each element is briefly described and a reference to a more detailed description provided so that interested readers can seek more information. In this fashion, complete information about the intervention is made available, yet the full details do not overwhelm this report. Note that not all references are to peer-reviewed literature: some are to curricular materials on the MedEd Portal website (https://www.mededportal.org), and others are to online materials.
The online supplementary appendix available with this report summarises key elements of each component, which is another option for making details available to readers. The authors were careful to note situations in which the intervention differed across sites: at two sites the written handoff tool was built into word-processing programmes, not the electronic medical record. Since interventions are often unevenly applied or taken up, variation in the application of intervention components across units, sites or clinicians is reported in this section where applicable.
The characteristics of the team that conducted the intervention (for instance, type and level of training, degree of experience, and administrative and/or academic position of the personnel leading workshops) and/or the personnel to whom the intervention was applied should be specified. Often the influence of the people involved in the project is as great as the project components themselves. The second example above,31 from an elegant study of the Hawthorne effect on hand hygiene rates, succinctly describes both the staff that were being studied and characteristics of the intervention personnel: the auditors tracking hand hygiene rates.
Study of the intervention
Approach chosen for assessing the impact of the intervention(s)
Approach used to establish whether the observed outcomes were due to the intervention(s)
Example 1: The nonparametric Wilcoxon-Mann-Whitney test was used to determine differences in OR use among Radboud UMC [University Medical Centre] and the six control UMCs together as a group. To measure the influence of the implementation of new regulations about cross functional teams in May 2012 in Radboud UMC, a [quasi-experimental] time-series design was applied and multiple time periods before and after this intervention were evaluated.32
Example 2: To measure the perceptions of the intervention on patients and families and its effect on transition outcomes, a survey was administered in the paediatric cystic fibrosis clinic at the start of the quality improvement intervention and 18 months after the rollout process. The survey included closed questions on demographics and the transition materials [usefulness of guide and notebook, actual use of notebook and guide, which specific notebook components were used in clinic and at home]. We also elicited open-ended feedback…
A retrospective chart review assessed the ways patients transferred from the paediatric to adult clinic before and after the transition programme started. In addition, we evaluated differences in BMI [body mass index] and hospitalizations 1 year after transfer to the adult centre.33
Explanation
Broadly, the study of the intervention is the reflection upon the work that was done, its effects on the systems and people involved, and an assessment of the internal and external validity of the intervention. Addressing this item will be greatly facilitated by the presence of a strong rationale, because when authors are clear about why they thought an intervention should work, the path to assessing the what, when, why and how of success or failure becomes easier.
The study of the intervention may at least partly (but not only) be accomplished through the study design used. For example, a stepped wedge design or comparison control group can be used to study the effects of the intervention. Other examples of ways to study the intervention include, but are not limited to, stakeholder satisfaction surveys around the intervention, focus groups or interviews with involved personnel, evaluations of the fidelity of implementation of an intervention, or estimation of unintended effects through specific analyses. The aims and methods for this portion of the work should be clearly specified. The authors should indicate whether these evaluative techniques were performed by the authors themselves or an outside team, and what the relationship was between the authors and the evaluators. The timing of the ‘study of the intervention’ activities relative to the intervention should be indicated.
In the first example,32 the cross-functional team study, the goal was to improve utilisation of operating room time by having a multidisciplinary, interprofessional group proactively manage the operating room schedule. This project used a prespecified study design, including an intervention group and a control group, to study the intervention. The authors assessed whether the observed outcomes were due to the intervention or some other cause (internal validity) by comparing operating room utilisation over time at the intervention site with utilisation at the control sites. They understood the possible confounding effects of system-wide changes to operating room policies, and planned their analysis to account for this by using a quasi-experimental time series design. The authors used statistical results to determine the validity of their findings, suggesting that the decrease in variation in use was indicative of organisational learning.
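To make this kind of between-group comparison concrete, here is a minimal sketch in Python; the utilisation figures, group sizes and variable names are invented for illustration and are not drawn from the study.

```python
# A sketch of a Wilcoxon-Mann-Whitney comparison between an
# intervention site and control sites. All numbers are hypothetical.
from scipy.stats import mannwhitneyu

# Hypothetical monthly operating-room utilisation (%) for each group.
intervention_site = [71, 74, 78, 80, 83, 82, 85, 84]
control_sites = [70, 69, 72, 71, 73, 70, 72, 71]

# Nonparametric test: no normality assumption about utilisation rates.
stat, p_value = mannwhitneyu(intervention_site, control_sites,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
```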
In a subsequent section of this report, the authors also outlined an evaluation they performed to make sure that improved operating room efficiency was not associated with adverse changes in operative mortality or complication rates. This is an example of how an assessment of unintended impact of the intervention—an important component of studying the intervention—might be completed. An additional way to assess impact in this particular study might have been to obtain information from staff on their impressions of the programme, or to assess how cross-functional teams were implemented at this particular site.
In the second example,33 a programme to improve the transition from paediatric to adult cystic fibrosis care was implemented and evaluated. The authors used a robust theoretical framework to help develop their work in this area, and its presence supported their evaluative design by showing whose feedback would be needed in order to determine success: healthcare providers, patients and their families. In this paper, the development of the intervention incorporated the principle of studying it through PDSA cycles, which were briefly reported to give the reader a sense of the validity of the intervention. Outcomes of the intervention were assessed by testing how patients' physical parameters changed over time before and after the intervention. To test whether these changes were likely to be related to the implementation of the new transition programme, patients and families were asked to complete a survey, which demonstrated the overall utility of the intervention to the target audience of families and patients. The survey also helped support the assertion that the intervention was the reason patient outcomes improved by testing whether people actually used the intervention materials as intended.
Measures
Measures chosen for studying processes and outcomes of the intervention(s), including rationale for choosing them, their operational definitions, and their validity and reliability
Description of the approach to the ongoing assessment of contextual elements that contributed to the success, failure, efficiency, and cost of the improvement
Methods employed for assessing completeness and accuracy of data
Example: Improvement in culture of safety and ‘transformative’ effects—Before and after surveys of staff attitudes in control and SPI1 [the Safer Patients Initiative, phase 1] hospitals were conducted by means of a validated questionnaire to assess staff morale, attitudes, and aspects of culture [the NHS National Staff Survey]…
Impact on processes of clinical care—To identify any improvements, we measured error rates in control and SPI1 hospitals by means of explicit [criterion based] and separate holistic reviews of case notes. The study group comprised patients aged 65 or over who had been admitted with acute respiratory disease: this is a high risk group to whom many evidence based guidelines apply and hence where significant effects were plausible.
Improving outcomes of care—We reviewed case notes to identify adverse events and mortality and assessed any improvement in patients' experiences by using a validated measure of patients' satisfaction [the NHS patient survey]…
To control for any learning or fatigue effects, or both, in reviewers, case notes were scrambled to ensure that they were not reviewed entirely in series. Agreement on prescribing error between observers was evaluated by assigning one in 10 sets of case notes to both reviewers, who assessed cases in batches, blinded to each other's assessments, but compared and discussed results after each batch.16
Explanation
Studies of healthcare improvement should document both planned and actual changes to the structure and/or process of care, and the resulting intended and/or unintended (desired or undesired) changes in the outcome(s) of interest.34 While measurement is inherently reductionistic, those evaluating the work can provide a rich view by combining multiple perspectives through measures of clinical, functional, experiential, and cost outcome dimensions.35–37
Measures may be routinely used to assess healthcare processes or designed specifically to characterise the application of the intervention in the clinical process. Either way, evaluators also need to consider the influence of contextual factors on the improvement effort and its outcomes.7,38,39 This can be accomplished through a mixed method design which combines data from quantitative measurement, qualitative interviews and ethnographic observation.40–43 In the study described above, triangulation of complementary data sources offers a rich picture of the phenomena under study, and strengthens confidence in the inferences drawn.
The choice of measures and type of data used will depend on the particular nature of the initiative under study, on data availability, feasibility considerations and resource constraints. The trustworthiness of the study will benefit from insightful reporting of the choice of measures and the rationale for choosing them. For example, in assessing ‘staff morale, attitudes, and aspects of ‘culture’ that might be affected’ by the SPI1, the evaluators selected the 11 most relevant of the 28 survey questions in the NHS Staff Survey questionnaire and provided references to detailed documentation for that instrument. To assess patient safety, the authors' approach to reviewing case notes ‘was both explicit (criterion based) and implicit (holistic) because each method identifies a different spectrum of errors’.16
Ideally, measures would be perfectly valid, reliable, and employed in research with complete and accurate data. In practice, such perfection is impossible.42 Readers will benefit from reports of the methods employed for assessing the completeness and accuracy of data, so they can critically appraise the data and the inferences made from them.
Analysis
Qualitative and quantitative methods used to draw inferences from the data
Methods for understanding variation within the data, including the effects of time as a variable
Example 1: We used statistical process control with our primary process measure of family activated METs [Medical Emergency Teams] displayed on a u-chart. We used established rules for differentiating special versus common cause variation for this chart. We next calculated the proportion of family-activated versus clinician-activated METs which was associated with transfer to the ICU within 4 h of activation. We compared these proportions using χ2 tests.44
Example 2: The CDMC [Saskatchewan Chronic Disease Management Collaborative] did not establish a stable baseline upon which to test improvement; therefore, we used line graphs to examine variation occurring at the aggregate level [data for all practices combined] and linear regression analysis to test for statistically significant slope [alpha=0.05]. We used small multiples, rational ordering and rational subgrouping to examine differences in the level and rate of improvement between practices.
We examined line graphs for each measure at the practice level using a graphical analysis technique called small multiples. Small multiples repeat the same graphical design structure for each ‘slice’ of the data; in this case, we examined the same measure, plotted on the same scale, for all 33 practices simultaneously in one graphic. The constant design allowed us to focus on patterns in the data, rather than the details of the graphs. Analysis of this chart was subjective; the authors examined it visually and noted, as a group, any qualitative differences and unusual patterns.
To examine these patterns quantitatively, we used a rational subgrouping chart to plot the average month to month improvement for each practice on an Xbar-S chart.45
Example 3: Key informant interviews were conducted with staff from 12 community hospital ICUs that participated in a cluster randomized control trial [RCT] of a QI intervention using a collaborative approach. Data analysis followed the standard procedure for grounded theory. Analyses were conducted using a constant comparative approach. A coding framework was developed by the lead investigator and compared with a secondary analysis by a coinvestigator to ensure logic and breadth. As there was close agreement for the basic themes and coding decisions, all interviews were then coded to determine recurrent themes and the relationships between themes. In addition, ‘deviant’ or ‘negative’ cases [events or themes that ran counter to emerging propositions] were noted. To ensure that the analyses were systematic and valid, several common qualitative techniques were employed including consistent use of the interview guide, audiotaping and independent transcription of the interview data, double coding and analysis of the data and triangulation of investigator memos to track the course of analytic decisions.46
Explanation
Various types of problems addressed by healthcare improvement efforts may make certain types of solutions more or less effective. Not every problem can be solved with one method; a problem often suggests its own best solution strategy. Similarly, the analytical strategy described in a report should align with the rationale, project aims and data constraints. Many approaches are available to help analyse healthcare improvement, including qualitative approaches (eg, fishbone diagrams in root cause analysis, structured interviews with patients/families, Gemba walks) and quantitative approaches (eg, time series analysis, traditional parametric and non-parametric testing between groups, logistic regression). Often the most effective analytical approach combines quantitative and qualitative data. Examples include value stream mapping, where a process is graphically outlined with quantitative cycle times denoted; a spaghetti map linking geography to quantitative physical movements; or annotations on a statistical process control (SPC) chart to allow for temporal insights between time series data and changes in system contexts.
In the first example by Brady et al,44 family-activated medical emergency teams (MET) are evaluated. The combination of three methods—statistical process control, a Pareto chart and χ2 testing—makes for an effective and efficient analysis. The choice of analytical methods is described clearly and concisely. The reader knows what to expect in the results sections and why these methods were chosen. The selection of control charts gives statistically sound control limits that capture variation over time. The control limits give expected limits for natural variation, whereas statistically based rules make clear any special cause variation. This analytical methodology is strongly suited both for the prospective monitoring of healthcare improvement work and for the subsequent reporting as a scientific paper. Depending on the type of intervention under scrutiny, complementary types of analyses may be used, including qualitative methods.
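For readers unfamiliar with how a u-chart's limits arise, the following short Python sketch shows the standard computation with invented counts; it illustrates the technique only and does not reproduce the MET study's data or code.

```python
# A u-chart sketch: event counts per varying exposure, with 3-sigma
# control limits that widen or narrow with each subgroup's size.
import numpy as np

events = np.array([4, 6, 3, 7, 5, 9, 2, 6])                    # hypothetical monthly counts
exposure = np.array([510, 480, 530, 495, 520, 505, 490, 515])  # hypothetical patient-days

u = events / exposure                   # observed rate in each subgroup
u_bar = events.sum() / exposure.sum()   # centre line
sigma = np.sqrt(u_bar / exposure)       # subgroup-specific standard error
ucl = u_bar + 3 * sigma                 # upper control limit
lcl = np.maximum(u_bar - 3 * sigma, 0)  # lower limit, floored at zero

# A point outside [lcl, ucl] signals special cause variation.
special_cause = (u > ucl) | (u < lcl)
print(np.column_stack([u, lcl, ucl, special_cause]))
```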
The MET analysis also uses a Pareto chart to analyse differences in characteristics between clinician-initiated and family-initiated MET activations. Finally, specific comparisons between subgroups, where time is not an essential variable, are augmented with traditional biostatistical approaches, such as χ2 testing. This example, with its one-paragraph description of analytical methods (control charts, Pareto charts and basic biostatistics), is easily understandable and clearly written so that it is accessible to front-line healthcare professionals who might wish to use similar techniques in their work.
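A minimal sketch of such a subgroup comparison, again in Python, may help readers see how a χ2 test is set up; the 2×2 table below is hypothetical, not the study's data.

```python
# Chi-squared test of independence on a hypothetical 2x2 table:
# does the proportion of MET activations followed by ICU transfer
# within 4 h differ between family- and clinician-activated METs?
from scipy.stats import chi2_contingency

#          transferred, not transferred
table = [[12, 28],    # family-activated (invented counts)
         [95, 165]]   # clinician-activated (invented counts)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```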
Every analytical method also has constraints, and the reason for choosing each method should be explained by authors. The second example, by Timmerman et al,45 presents a more complex analysis of the data processes involved in a multicentre improvement collaborative. The authors provide a clear rationale for selecting each of their chosen approaches. Principles of healthcare improvement analytics are turned inwards to understand more deeply the strengths and weaknesses of the way in which primary data were obtained, rather than interpretation of the clinical data itself. In this example,45 rational subgrouping of participating sites is undertaken to understand how individual sites contribute to variation in the process and outcome measures of the collaborative. Control charts have inherent constraints, such as the requisite number of baseline data points needed to establish preliminary control limits. Recognising this, Timmerman et al used linear regression to test for statistical significance in the slopes of aggregate data, and used run charts for graphical representation of the data to enhance understanding.
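As an illustration of testing for a significant slope when no stable baseline exists, the sketch below fits an ordinary least-squares line to made-up monthly aggregate data; it mirrors the kind of regression described, not the collaborative's actual analysis.

```python
# Linear regression on aggregate monthly data, testing whether the
# slope differs from zero at alpha = 0.05. Data are hypothetical.
from scipy.stats import linregress

months = list(range(1, 13))
measure = [42, 44, 43, 47, 49, 48, 52, 53, 55, 54, 58, 60]  # hypothetical %

fit = linregress(months, measure)
if fit.pvalue < 0.05:
    print(f"significant trend: slope = {fit.slope:.2f}/month (p = {fit.pvalue:.4f})")
else:
    print(f"no significant trend (p = {fit.pvalue:.4f})")
```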
Donabedian said, “Measurement in the classical sense—implying precision in quantification—cannot reasonably be expected for such a complex and abstract object as quality.”47 In contrast to the what, when and how much of quantitative, empirical approaches to data, qualitative analytical methods strive to illuminate the how and why of behaviour and decision making—be it of individuals or complex systems. In the third example, by Dainty et al, grounded theory is applied to improvement work: data from structured interviews are used to gain insight into, and generate hypotheses about, the causative or moderating forces in multicentre quality improvement collaboratives, including how they contribute to actual improvement. Themes were elicited using multiple qualitative methods, including a structured interview process, audiotaping with independent transcription, comparison of analyses by multiple investigators, and recurrence frequencies of constructs.46
In all three example papers, the analytical methods selected are clearly described and appropriately cited, affording readers the ability to understand them in greater detail if desired. In the first two, SPC methods are employed in divergent ways that are instructive regarding the versatility of this analytical method. All three examples provide a level of detail which further supports replication.
Ethical considerations
Ethical aspects of implementing and studying the intervention(s) and how they were addressed, including, but not limited to, formal ethics review and potential conflict(s) of interest.
Example: Close monitoring of [vital] signs increases the chance of early detection of patient deterioration, and when followed by prompt action has the potential to reduce mortality, morbidity, hospital length of stay and costs. Despite this, the frequency of vital signs monitoring in hospital often appears to be inadequate…Therefore we used our hospital's large vital signs database to study the pattern of the recording of vital signs observations throughout the day and examine its relationship with the monitoring frequency component of the clinical escalation protocol…The large study demonstrates that the pattern of recorded vital signs observations in the study hospital was not uniform across a 24 h period…[the study led to] identification of the failure of our staff in our study to follow a clinical vital signs monitoring protocol…
Acknowledgements The authors would like to acknowledge the cooperation of the nursing and medical staff in the study hospital.
Competing interests VitalPAC is a collaborative development of The Learning Clinic Ltd [TLC] and Portsmouth Hospitals NHS Trust [PHT]. PHT has a royalty agreement with TLC to pay for the use of PHT intellectual property within the VitalPAC product. Professor Prytherch and Drs Schmidt, Featherstone and Meredith are employed by PHT. Professor Smith was an employee of PHT until 31 March 2011. Dr Schmidt and the wives of Professors Smith and Prytherch are shareholders in TLC. Professors Smith and Prytherch and Dr Schmidt are unpaid research advisors to TLC. Professors Smith and Prytherch have received reimbursement of travel expenses from TLC for attending symposia in the UK.
Ethics approval Local research ethics committee approval was obtained for this study from the Isle of Wight, Portsmouth and South East Hampshire Research Ethics Committee [study ref. 08/02/1394].48
Explanation
SQUIRE 2.0 provides guidance to authors of improvement activities in reporting on the ethical implications of their work. Those reading published improvement reports should be assured that potential ethical issues have been considered in the design, implementation and dissemination of the activity. The example given highlights key ethical issues that may be reported by authors, including whether or not independent review occurred, and any potential conflicts of interest.49–56 These issues are directly described in the quoted sections.
Expectations for the ethical review of research and improvement work vary between countries57 and may also vary between institutions. At some institutions, both quality improvement and human subject research are reviewed using the same mechanism. Other institutions designate separate review mechanisms for human subject research and quality improvement work.56 In the example above, from the UK, Hands et al48 report that the improvement activity described was reviewed and approved by a regional research ethics committee. In another example, from the USA, the authors of a report describing a hospital-wide improvement activity to increase the rate of influenza vaccinations indicate that their work was reviewed by the facility's quality management office.58
Avoiding potential conflict of interest is as important in improvement work as it is in research. The authors in the example paper indicate the presence or absence of potential conflicts of interest under the heading ‘Competing Interests.’ Here, the authors provide the reader with clear and detailed information concerning any potential conflict of interest.
Both the original and SQUIRE 2.0 guidelines stipulate that reports of interventions to improve the safety, value or quality of healthcare should explicitly describe how potential ethical concerns were reviewed and addressed in development and implementation of the intervention. This is an essential step for ensuring the integrity of efforts to improve healthcare, and should therefore be explicitly described in published reports.
Results
Results: evolution of the intervention and details of process measures
Initial steps of the intervention(s) and their evolution over time (eg, timeline diagram, flow chart or table), including modifications made to the intervention during the project
Details of the process measures and outcome
Example
Over the course of this initiative, 479 patient encounters that met criteria took place. TTA [Time to antibiotic] delivery was tracked, and the percentage of patients receiving antibiotics within 60 minutes of arrival increased from 63% to 99% after 8 months, exceeding our goal of 90% [figure 1]… Control charts demonstrated that antibiotic administration was reliably <1 hour by phase III and has been sustained for 24 months since our initiative goal was first met in June 2011.
Key improvement areas and specific interventions for the initiative are listed in [figure 2]. During phase I, the existing processes for identifying and managing febrile patients with central lines were mapped and analyzed. Key interventions that were tested and implemented included revision of the greeter role to include identification of patients with central lines presenting with fever and notification of the triage nurse, designation of chief complaint as “fever/central line,” re-education and re-emphasis of triage acuity as 2 for these patients, and routine stocking of the Pyxis machine ….
In phase II, strategies focused on improving performance by providing data and other information for learning, using a monthly newsletter, public sharing of aggregate compliance data tracking, individual reports of personal performance, personal coaching of noncompliant staff, and rewards for compliance…
In phase III, a management guideline with key decision elements was developed and implemented [figure 3]. A new patient identification and initial management process was designed based on the steps, weaknesses, and challenges identified in the existing process map developed in phase I. This process benefited from feedback from frontline ED staff and the results of multiple PDSA cycles during phases I and II….
During the sustainability phase, data continued to be collected and reported to monitor ongoing performance and detect any performance declines should they occur…18
Explanation
Healthcare improvement work is based on a rationale, or hypothesis, as to what intervention will have the desired outcome(s) in a given context. Over time, as a result of the interaction between interventions and context, these hypotheses are re-evaluated, resulting in modifications or changes to the interventions. Although the mechanism by which this occurs should be included in the methods section of a report, the resulting transformation of the intervention over time rightfully belongs under results. The results section should therefore describe both this evolution and its associated outcomes.
When publishing this work, it is important that the reader has specific information about the initial interventions and how they evolved. This can be provided in tables and figures in addition to text. In the example above, interventions are described in phases I, II, III and a sustainability phase, and information is provided as to why they evolved and how various staff roles were affected (figure 2). This level of detail allows readers to imagine how these interventions and staff roles might be adapted in the context of their own institutions, as an intervention that is successful in one organisation may not be in another.
It is important to report the degree of success achieved in implementing an intervention in order to assess its fidelity, ie, the proportion of the time that the intervention actually occurred as intended. In the example above, the goal of delivering antibiotics within an hour of arrival, a process measure, is expressed as the percentage of all patients for whom it was achieved. The first chart (figure 3) shows the sustained improvement in this measure over time. The second chart (figure 4) illustrates the resulting decrease in variation as the interventions evolved and took hold. The charts are annotated to show the phases of evolution of the project, enabling readers to see where each intervention fits in relation to project results over time.
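Although the example does not name the chart type used, proportion measures such as the percentage of patients treated within 60 minutes are conventionally plotted on a p-chart. Assuming that convention (a standard statistical process control formula, not taken from the cited paper), the centre line and limits for a subgroup of $n_i$ patients are:

$$\bar{p} = \frac{\sum_i x_i}{\sum_i n_i}, \qquad \mathrm{UCL}_i,\ \mathrm{LCL}_i = \bar{p} \pm 3\sqrt{\frac{\bar{p}(1-\bar{p})}{n_i}}$$

where $x_i$ is the number of patients in subgroup $i$ for whom the process measure was achieved. Points falling outside these limits signal special-cause variation, which is what allows annotated phase changes on the chart to be linked to shifts in the measure.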
Results: contextual elements and unexpected consequences
Contextual elements that interacted with the interventions
Observed associations between outcomes, interventions and relevant contextual factors
Unintended consequences such as benefits, harms, unexpected results, problems or failures associated with the intervention(s)
Example
Quantitative results
In terms of QI efforts, two-thirds of the 76 practices [67%] focused on diabetes and the rest focused on asthma. Forty-two percent of practices were family medicine practices, 26% were pediatrics, and 13% were internal medicine. The median percent of patients covered by Medicaid and with no insurance was 20% and 4%, respectively. One-half of the practices were located in rural settings and one-half used electronic health records. For each diabetes or asthma measure, between 50% and 78% of practices showed improvement [ie, a positive trend] in the first year.
Tables 2 and 3 show the associations of leadership with clinical measures and with practice change scores for implementation of various tools, respectively. Leadership was significantly associated with only 1 clinical measure, the proportion of patients having nephropathy screening [OR=1.37: 95% CI 1.08 to 1.74]. Inclusion of practice engagement reduced these odds, but the association remained significant. The odds of making practice changes were greater for practices with higher leadership scores at any given time [ORs=1.92–6.78]. Inclusion of practice engagement, which was also significantly associated with making practice changes, reduced these odds [ORs=2.41 to 4.20], but the association remained significant for all changes except for registry implementation
Qualitative results
Among the 12 practices interviewed, 5 practices had 3 or fewer clinicians and 7 had 4 or more [range=1–32]. Seven practices had high ratings of practice change by the coach. One-half were NCQA [National Committee for Quality Assurance] certified as a patient-centered medical home. These practices were similar to the quantitative analysis sample except for higher rates of electronic health record use and Community Care of North Carolina Medicaid membership…
Leadership-related themes from the focus groups included having [1] someone with a vision about the importance of the work, [2] a middle manager who implemented the vision, and [3] a team who believed in and were engaged in the work.…Although the practice management provided the vision for change, patterns emerged among the practices that suggested leaders with a vision are a necessary, but not sufficient condition for successful implementation.
Leading from the middle
All practices had leaders who initiated the change, but practices with high and low practice change ratings reported very different ‘operational’ leaders. Operational leaders in practices with low practice change ratings were generally the same clinicians, practice managers, or both who introduced the change. In contrast, in practices with high practice change ratings, implementation was led by someone other than the lead physician or top manager…59
Explanation
One of the challenges in reporting healthcare improvement studies is capturing the effect of context on the success or failure of the intervention(s). The most commonly reported contextual elements that may interact with interventions are structural variables, including organisational/practice type, volume, payer mix, electronic health record use and geographical location. Other contextual elements associated with healthcare improvement success include top management leadership, organisational structure, data infrastructure/information technology, physician involvement in activities, motivation to change and team leadership.60 In this example, the authors provided descriptive information about the structural elements of the individual practices, including type of practice, payer mix, geographical setting and use of electronic health records. The authors noted variability in improvement in diabetes and asthma measures across the practices, and examined how characteristics of practice leadership affected the change process for an initiative to improve diabetes and asthma care. Practice leadership was measured monthly by the community-based practice coach at each site. For analysis, these scores were collapsed into low (0–1) and high (2–3) groups. Practice change ratings were also assigned by the practice coaches, indicating the degree of implementation and use of patient registries, care templates, protocols and patient self-management support tools. Local leadership showed no association with most of the clinical measures; however, local leadership involvement was significantly associated with implementation of the process tools used to improve outcomes. The authors use tables to display these associations clearly to the reader.
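As a reminder of what lies behind figures such as ‘OR=1.37; 95% CI 1.08 to 1.74’ in the example, the odds ratio for a simple 2×2 table with cells $a$, $b$, $c$ and $d$, together with its Wald-type 95% confidence interval, takes the form below. (The study's own estimates come from regression models, so this is an illustration of the general form only.)

$$\mathrm{OR} = \frac{ad}{bc}, \qquad 95\%\ \mathrm{CI} = \exp\!\left(\ln \mathrm{OR} \pm 1.96\sqrt{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}}\right)$$

An interval that excludes 1, as here, indicates a statistically significant association at the 5% level.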
In addition, the authors use the information from the coaches’ ratings to further explore this concept of practice leadership. The authors conducted semistructured focus group interviews in a sample of 12 of the 76 practices, selected on the basis of improvement in clinical measures and in practice change score. Two focus groups were conducted in each practice: one with practice clinicians and administrators, and one with front-line staff. Three themes emerged from these interviews that explicated the concept of practice leadership in these groups. While two of the themes reflect contextual elements that are often cited in the literature (visionary leader and engaged team), the authors addressed an unexpected theme about the role of the middle (operational) manager. This operational leader was often reported to be a nurse or nurse practitioner with daily interactions with physicians and staff, who appeared to be influential in facilitating change. The level of detail provided about the specifics of practice leadership can be useful to readers who are engaged in their own improvement work. Although no harms or failures related to the work were described, transparent reporting of negative results is as important as reporting of positive ones.
In this example, the authors used a mixed methods approach in which practice leadership and engagement were quantitatively rated by improvement coaches and qualitatively evaluated using focus groups. The use of qualitative methods enhanced understanding of the context of practice leadership. A mixed methods approach is not a requirement for healthcare improvement studies, as the influence of contextual elements can be assessed in many ways. For example, Cohen et al simply describe the probable impact of the 2009 H1N1 pandemic on their work to increase influenza vaccination rates in hospitalised patients,58 providing important contextual information to assist the reader's understanding of the results.
Results: missing data
Details about missing data
Example 1
We successfully contacted 69% [122/178] of patients/families in the postimplementation group…Among the remaining 56 patients [31%] for whom no phone or E-mail follow-up was obtained, 34 had another encounter in our hospital on serial reviews of their medical record. Nine patients were evaluated in a cardiology clinic and 7 in a neurology clinic. As a result of these encounters, there were no patients ultimately diagnosed with a cardiac or neurologic condition.61
Example 2
We identified 328 patients as under-immunized between September 2009 and September 2010. We fully immunized 194 [59%] of these patients by September 2010…We failed to recover missing outside immunization records on 15 patients [5%]. The remaining 99 patients [30%] refused vaccines, transferred care, or were unreachable by phone or mail. For the 194 patients we fully immunized, we made 504 [mean 2.6] total outreach attempts for care coordination. We immunized 176 [91%] of these patients by age 24 months. For the 20 patients who remained under-immunized, we made 113 [mean 5.7] total outreach attempts for care coordination. We continued attempting outreach to immunize these patients even after their second birthday.62
Explanation
Whenever possible, the results section of a healthcare improvement paper should account for missing data. Doing so enables the reader to understand potential biases in the analysis, and may add important context to the study findings. It is important for authors to state clearly why data are missing, for example, technical problems or errors in data entry, attrition of participants from an improvement initiative over time, or patients lost to follow-up. Efforts made by the team to recover the data should be described, and any available details about the missing data provided.
In the first example,61 the improvement team was unable to contact 56 patients for phone or email follow-up (ie, why the data are missing). To account for these missing data, the team performed serial reviews of medical records. In doing so, they were able to report patient information relevant to the study outcomes. In the second example,62 the authors also clearly state the reasons for missing data (failure to recover outside records, vaccine refusal, transfer of care, patients unreachable by phone or mail). In addition, they give details about the number of outreach attempts made for specific patient groups. Providing a detailed description of missing data allows for a more accurate interpretation of study findings.
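In practice, this kind of accounting is a simple tabulation in which every patient appears exactly once, either as followed up or under a stated reason for missingness. The sketch below is a hypothetical illustration; the cohort size, counts and category labels are invented, not taken from the cited studies.

```python
# Hypothetical illustration: tabulate follow-up status for a cohort so that
# every patient is accounted for, including reasons data are missing.
from collections import Counter

follow_up_status = (            # invented counts for a cohort of 150
    ["contacted"] * 104 +
    ["chart review only"] * 31 +
    ["lost to follow-up"] * 15
)

counts = Counter(follow_up_status)
total = sum(counts.values())
for status, n in counts.most_common():
    print(f"{status}: {n}/{total} ({100 * n / total:.1f}%)")
```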
Discussion
Summary
Key findings, including relevance to the rationale and specific aims
Particular strengths of the project
Example
In our 6-year experience with family-activated METs [Medical Emergency Teams], families uncommonly activated METs. In the most recent and highest-volume year, families called 2.3 times per month on average. As a way of comparison, the hospital had an average of 8.7 accidental code team activations per month over this time. This required an urgent response from the larger team. Family activation less commonly resulted in ICU transfer than clinician activated METs, although 24% of calls did result in transfers. This represents a subset of deteriorating patients that the clinical team may have missed. In both family-activated and clinician-activated MET calls, clinical deterioration was a common cause of MET calls. Families more consistently identified their fear that the child's safety was at risk, a lack of response from the clinical team, and that the interaction between team and family had become dismissive. To our knowledge, this study is the largest study of family-activated METs to date, both in terms of count of calls and length of time observed. It is also the first to compare reasons for MET calls from families with matched clinician-activated calls.44
Explanation
Although often not called out with a specific subheading, the ‘summary’ of a report on healthcare improvement most often introduces and frames the ‘discussion’ section. While the first paragraph should be a focused summary of the most critical findings, the majority of the project's results should be contained in the results section. The goal of the summary is to capture the major findings and bridge to a more nuanced exploration of those findings in the discussion. Exactly where the summary ends is far less important than how it sets up the reader to explore and reflect on the ensuing discussion.
The example above gives a clear and concise statement of the study's strengths and distinctive features. This summary recaps quantitative findings (families called METs relatively infrequently and fewer of their calls resulted in intensive care unit (ICU) transfers), and introduces a subsequent discussion of concerns identified by families which might not be visible to clinicians, including ways in which ‘family activation of an MET may improve care without reducing MET-preventable codes outside of the ICU’.44 This conveys an important message and bridges to a discussion of the existing literature and terminology. Providing a focused summary in place of an exhaustive re-statement of project results appropriately introduces the reader to the discussion section and a more thorough description of the study's findings and implications.
The authors go on to relate these main findings back to the nature and significance of the problem and the specific aims previously outlined in the introduction section, specifically (emphasis added) ‘To evaluate the burden of family activation on the clinicians involved…to better understand the outcome of METs, and to begin to understand why families call METs’.44
Another approach to structuring the summary component of the discussion is to succinctly link results to the relevant processes in the development of the associated interventions. This approach is illustrated by Beckett et al in a recent paper about decreasing cardiac arrests in the acute hospital setting,63 “Key to this success has been the development of a structured response to the deteriorating patient. Following the implementation of reliable EWS [early warning systems] across the AAU [Acute Admissions Unit] and ED [Emergency Department], and the recognition and response checklists, plus weekly safety meetings in the AAU at SRI [Stirling Royal Infirmary], there was an immediate fall in the number of cardiac arrests, which was sustained thereafter.”63 This linkage serves to reintroduce the reader to some of the relevant contextual elements, which can subsequently be discussed in more detail as appropriate. Importantly, it also serves to frame the interpretive section of the discussion, which focuses on comparing results with findings from other publications and on further evaluating the project's impact.
Interpretation
Nature of the association between the intervention(s) and the outcomes
Comparison of results with findings from other publications
Impact of the project on people and systems
Reasons for any differences between observed and anticipated outcomes, including the influence of context
Costs and strategic trade-offs, including opportunity costs
Example 1
(a) After QI interventions, the percentage of patients attending four or more clinic visits significantly improved, and in 2012 we met our goal of 90% of patients attending four or more times a year. A systematic approach to scheduling processes, timely rescheduling of patients who missed appointments and monitoring of attendance resulted in a significant increase in the number of patients who met the CFF national recommendation of four or more visits per year.64
(b) Although the increase in the percentage of patients with greater than 25th centile for BMI/W-L from 80% to 82% might seem small, it represents a positive impact on a few more patients and provides more opportunities for improvement. Our data are in agreement with Johnson et al. (2003), who reported that frequent monitoring among other interventions made possible due to patients being seen more in clinic was associated with improved outcomes in CF.64
(c) We learned that families are eager to have input and be involved…participation in the [learning and leadership collaborative] resulted in a positive culture change at the ACH CF Care Center regarding the use of QI methods.64
(d) We noticed our clinic attendance started to improve before the [intervention] processes were fully implemented. We speculate this was due to the heightened awareness of our efforts by patients, families and our CF team.64
(e) Replication of these processes could be hindered by lack of personnel, lack of buy-in by the hospital administration and lack of patient/family involvement…barriers to attendance included rising fuel costs, transportation limitations, child care issues, missed workdays by caregivers and average low-income population.64
Example 2
The direct involvement of patients and families…allowed us to address the social and medical barriers to adherence. Their input was invaluable since they live with the treatment burden that is a daily part of CF care…the in-clinic patient demonstration gave staff the ability to upgrade or replace equipment that was not functioning.65
We found that following a simple algorithm helped to maintain consistency in our program…the simplicity of this program makes it easily incorporated into routine CF clinic visits.65
Explanation
In the first example, Berlinski et al64 describe the implications of their improvement efforts by highlighting that they increased the proportion of patients with CF attending four or more clinic visits a year, and achieved secondary improvements in a nutritional outcome and in the culture of their context. The authors also offer alternative explanations for outcomes, including factors that might have confounded the asserted relationship between intervention and outcome, namely that performance on the primary outcome began to improve well before implementation of the intervention. This provides insight into what the actual drivers of the outcome might have been, and can be very helpful to others seeking to replicate or modify the intervention. Finally, their comparison of their results with those of a similar study provides a basis for considering the feasibility, sustainability, spread and replication of the intervention.
In the second example, Zanni et al65 found that the simplicity of their intervention maximised ease of implementation, suggesting that the costs and trade-offs of replication in similar contexts are likely to be minimal. Conversely, Berlinski et al64 cite barriers to replicating and sustaining their work, including staffing, leadership, population socioeconomic characteristics and informatics issues, each of which could present cost or trade-off considerations that leadership will need to weigh to support implementation and sustainability. Additionally, both Berlinski et al and Zanni et al observe that patient and family involvement in the planning and intervention process simultaneously improved the context and the effectiveness of the intervention.
Limitations
Limits to the generalisability of the work
Factors that might have limited internal validity such as confounding, bias, or imprecision in the design, methods, measurement, or analysis
Efforts made to minimise and adjust for limitations
Example 1
Our study had several limitations. Our study of family MET activations compared performance with our historical controls, and we were unable to adjust for secular trends or unmeasured confounders. Our improvement team included leaders of our MET committee and patient safety, and we are not aware of any ongoing improvement work or systems change that might have affected family MET calls. We performed our interventions in a large tertiary care children's hospital with a history of improvement in patient safety and patient-centred and family-centred care.
Additionally, it is uncertain and likely very context-dependent as to what is the ‘correct’ level of family-activated METs. This may limit generalizability to other centres, although the consistently low rate of family MET calls in the literature in a variety of contexts should reduce concerns related to responding team workload. We do not have process measures of how often MET education occurred for families and of how often families understood this information or felt empowered to call. This results in a limited understanding of the next best steps to improve family calling. Our data were collected in the course of clinical care with chart abstraction from structured clinical notes. Given this, it is possible that notes were not written for family MET calls that were judged ‘nonclinical.’ From our knowledge of the MET system, we are confident such calls are quite few, but we lack the data to quantify this. Our chart review for the reasons families called did not use a validated classification tool as we do not believe one exists. This is somewhat mitigated by our double independent reviews that demonstrated the reliability of our classification scheme.44
Example 2
Our study has a number of important limitations. Our ethnographic visits to units were not longitudinal, but rather snapshots in time; changes in response to the program could have occurred after our visits. We did not conduct a systematic audit of culture and practices, and thus some inaccuracies in our assessments may be present. We did not evaluate possible modifiers of effect of factors such as size of unit, number of consultants and nurses, and other environmental features. We had access to ICUs’ reported infection rates only if they provided them directly to us; for information governance reasons, these rates could not be verified. It is possible that we have offered too pessimistic an interpretation of whether Matching Michigan ‘worked’: the quantitative evaluation may have underestimated the effects of the program [or over-estimated the secular trend], since the ‘waiting’ clusters were not true controls that were unexposed to the interventions. …66
Explanation
The limitations section offers an opportunity to present potential weaknesses of the study, explain the choice of methods, measures and intervention, and examine why results may not be generalisable beyond the context in which the work occurred. In the first example, a study of family-activated METs, Brady et al identified a number of issues that might influence internal validity and the extent to which their findings are generalisable to other hospitals. The success of METs, and the participation of family members in calling these teams, may depend on contextual attributes such as leadership involvement. Although few hospitals have implemented family-activated METs, the growing interest in patient and family engagement may contribute to broader use of this intervention. No data were available to assess secular trends in these practices, so the possibility that the observed changes resulted from external factors cannot be excluded.
There were few family-activated MET calls. This positive result may stem from family education, but the authors report that they had limited data on such education. The lack of a validated tool to structure the chart review is noted as a potential weakness, as is the possibility that some non-clinical MET calls were never recorded in the chart. The authors also note that the observed levels of family-activated MET calls are consistent with other literature.
The impact of improvement interventions often varies with context, but the large number of potential factors to consider requires that researchers focus on a limited set of contextual measures they believe may influence success and future adaptation and spread. In the second example, Dixon-Woods et al assessed variation in the results of implementing the central line bundle to reduce catheter-related bloodstream infections in English ICUs.66 While English units made improvements, the results were not as impressive as in the earlier US experience. The researchers point to the prior experience of staff in the English ICUs with several infection control campaigns as contributing to this difference: many English clinicians viewed the new programme as redundant, believing this was a problem already solved. The research team also notes that some of the English ICUs did not have an organisational culture that supported consistent implementation of the required changes.
Dixon-Woods et al relied on quantitative data on clinical outcomes as well as observation and qualitative interviews with staff. However, as they report, their study had several limitations. Their visits to the units were not longitudinal, so changes could have been made in some units after the researchers' observations. They did not carry out systematic audits of culture and practices that might have revealed additional information, nor did they assess the impact of local factors, including the size of the unit, the number of doctors and nurses, and other factors that might have affected the capability of the unit to implement new practices. Moreover, although the study included comparison (‘waiting’) clusters, these were not true unexposed controls, and considerable public and professional interest in these issues may have influenced performance and reduced the apparent impact of the intervention. The authors' report66 of the context and limitations is crucial to assist the reader in assessing their results, and in identifying factors that might influence the results of similar interventions elsewhere.
Conclusions
Usefulness of the work
Sustainability
Potential for spread to other contexts
Implications for practice and for further study in the field
Suggested next steps
Example
We have found that average paediatric nurse staffing ratios are significantly associated with hospital readmission for children with common medical and surgical conditions. To our knowledge, this study is the first to explicitly examine and find an association between staffing ratios and hospital readmission in paediatrics… Our findings have implications for hospital administrators given the national emphasis on reduction of readmissions by payers. The role of nursing care in reducing readmissions has traditionally focused on nurse-led discharge planning programmes in the inpatient setting and nurse-directed home care for patients with complex or chronic conditions.
While these nurse-oriented interventions have been shown to significantly reduce readmissions, our findings suggest that hospitals might also achieve reductions in readmission by focusing on the number of patients assigned to nurses. In paediatrics, limiting nurses’ workloads to four or fewer patients appears to have benefits in reducing readmissions.
Further, hospitals are earnestly examining their discharge processes and implementing quality improvement programmes aimed at preparing patients and families to manage health condition(s) beyond the hospital. Quality improvement initiatives to improve inpatient care delivery often depend upon the sustained efforts of front-line workers, particularly nurses. Prior research shows that hospitals with better nurse staffing ratios deliver Joint Commission-recommended care for key conditions more reliably, highlighting the inter-relationship of nurse staffing levels and quality improvement success.
The sustainability of quality improvement initiatives related to paediatric readmission may ultimately depend on nurses' ability to direct meaningful time and attention to such efforts.67
Explanation
The conclusion of a healthcare improvement paper should address the overall usefulness of the work, including its potential for dissemination and implications for the field, both in terms of practice and policy. It may be included as a separate section in or after the discussion section, or these components may be incorporated within a single overall discussion section.
The authors of this report highlight the usefulness of their research with reference to ‘the national [US] emphasis on reduction of readmission by payers’. They also refer throughout the paper to the debates and research around appropriate nurse staffing levels, and to the impact of nurse staffing levels on the sustainability of quality improvement initiatives in general, with reference to the key role of nurses in improving care and evidence that nurse staffing levels are associated with delivery of high quality care. Although the authors do not refer directly to the potential for spread to other contexts, the generalisability of their findings is discussed in a separate section of the discussion (not included here).
In this example, the authors refer to ‘implications for hospital administrators’ because their findings ‘suggest that hospitals might also achieve reduction in readmissions by focusing on the number of patients assigned to nurses’. They also observe that these findings speak to ‘the validity of the California minimum staffing ratio for paediatric care’. Perhaps they could have suggested more in terms of implications for policy, for example what their findings might mean for the potential of payer organisations to influence nurse staffing levels through their contracts, or for broader government legislation on nurse-patient ratios. However, in their discussion they also recognise the limitations of a single study to inform policy decisions.
The need for further study is emphasised in the wider discussion section. The authors note that ‘more research is needed to better understand the reasons for children's readmissions and thus identify which ones are potentially preventable’, calling for ‘additional research on both paediatric readmission measures and the relationship between nursing care delivery, nurse staffing levels and readmissions’. In writing about healthcare improvement, it is important that the authors’ conclusions are appropriately related to their findings, reflecting their validity and generalisability, and their potential to inform practice. In this case, direct recommendations to change practice are appropriately withheld given the need for further research.
Funding
Sources of funding that supported this work. Role, if any, of the funding organisation in the design, implementation, interpretation and reporting.
Example
Funding/Support: This research was funded by the Canadian Institutes of Health Research, the Ontario Ministry of Health and Long-Term Care, the Green Shield Canada Foundation, the University of Toronto Department of Medicine, and the Academic Funding Plan Innovation Fund.
Role of the Funder/Sponsor: None of the funder or sponsors had any role in the design of the study, the conduct of the study, the collection, management, analysis or interpretation of the data, or the preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.68
Explanation
Sources of funding for quality improvement should be clearly stated at the end of a manuscript in similar fashion to other scholarly reports. Any organisation, institution or agency that contributed financially to any part of the project should be listed. In this example, funding was received from multiple sources including government, university and foundation granting agencies.
Because funders have a financial interest in the quality improvement project, they have the potential to introduce bias, for example, in favour of exaggerated effect sizes. The role of each funding source should be described in sufficient detail, as in the example above, to allow readers to assess whether these external parties may have influenced the reporting of improvement outcomes. A recent paper by Trautner et al takes a similar approach.69
Summary and conclusions regarding this E&E
The SQUIRE 2.0 E&E is intended to help authors ‘operationalise’ SQUIRE in their reports of systematic efforts to improve the quality, safety and value of healthcare. Given the rapid growth in healthcare improvement over the past two decades, it is imperative to promote the sharing of successes and failures to inform further development of the field. The E&E provides guidance about how to use SQUIRE as a structure for writing, and can be a starting point for ongoing dialogue about key concepts that are addressed in the guidelines. We hope that SQUIRE 2.0 will challenge authors to write better and to think more clearly about the role of formal and informal theory, the interaction between context, interventions and outcomes, and the methods for studying improvement work. Owing to space considerations, we have been able to cite only a few of the many possible examples from the literature for each guideline section. To explore these key concepts in healthcare improvement further, we recommend both the complete articles cited by the authors of this E&E and their secondary references. To promote the spread and sustainability of SQUIRE 2.0, the guidelines, this E&E and the accompanying glossary are accessible on the SQUIRE website (http://www.squire-statement.org). The website also links the viewer to resources such as screencasts and opportunities to discuss key concepts through an interactive forum.
Since the publication of SQUIRE 1.01 in 2008, there has been an enormous increase in the number and complexity of published reports about healthcare improvement. We hope that the time spent in the evaluation and careful development of SQUIRE 2.0 and this E&E will contribute to a new chapter in scholarly writing about healthcare improvement. We look forward to the continued growth of the field and the further evolution of SQUIRE as we deepen our understanding of how to improve the quality, safety and value of healthcare.
References
Footnotes
Correction notice This article has been updated since it published Online First. The names of two authors have been revised.
Twitter Follow Leora Horwitz at @leorahorwitzmd and Johan Thor at @johanthor1
Collaborators Frank Davidoff, MD Editor Emeritus, Annals of Internal Medicine, and Adjunct Professor at The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA, fdavidoff@cox.net; Paul Batalden, MD Active Emeritus Professor, Pediatrics and Community and Family Medicine, Geisel School of Medicine at Dartmouth, The Dartmouth Institute for Health Policy and Clinical Practice Hanover, New Hampshire, USA, paul.batalden@gmail.com; David Stevens, MD Adjunct Professor, The Dartmouth Institute for Health Policy and Clinical Practice, Hanover, NH, USA, Editor Emeritus, BMJ Quality and Safety, London, UK Senior Fellow, Institute for Healthcare Improvement, Cambridge, MA, USA, david.p.stevens@dartmouth.edu. Dr. Davidoff contributed substantially to the editing of the paper but did not serve as an author. Drs. Davidoff, Stevens and Batalden all contributed substantially to the final version of the SQUIRE 2.0 Guidelines, which provides the framework for this manuscript, and offered comments and guidance during the writing process.
Contributors All listed authors contributed substantially to the writing of the manuscript. Each author was assigned a specific section, and was the primary author for that section, with one exception: TCF and JCM coauthored the section entitled ‘Summary.’ Each author also had an opportunity to review the final manuscript prior to its submission. The corresponding author and guarantor of this project, DG, was responsible for coordinating submissions and for the primary structuring and editing of this manuscript. GO and LD contributed substantially to both editing and to the conceptualization of the manuscript, and to the purpose and structure of included sections. LD also authored the section entitled ‘Study of the Intervention.’
Funding The revision of the SQUIRE guidelines was supported by funding from the Robert Wood Johnson Foundation and the Health Foundation. Robert Wood Johnson Foundation (grant number 70024) Health Foundation (grant number 7099).
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.