The SQUIRE Guidelines: an evaluation from the field, 5 years post release
  1. Louise Davies1,2,3,
  2. Paul Batalden3,
  3. Frank Davidoff3,
  4. David Stevens3,
  5. Greg Ogrinc4,5
  1. VA Outcomes Group, Department of Veterans Affairs Medical Center, White River Junction, VT, USA
  2. Department of Surgery - Otolaryngology, Geisel School of Medicine at Dartmouth, Hanover, NH, USA
  3. The Dartmouth Institute for Health Policy & Clinical Practice, Lebanon, NH, USA
  4. The Geisel School of Medicine at Dartmouth, Hanover, NH, USA
  5. Department of Veterans Affairs Medical Center, White River Junction, VT, USA
  Correspondence to Dr Louise Davies, VA Outcomes Group –111B, Department of Veterans Affairs Medical Center, 215 North Main Street, White River Junction, VT 05009, USA; Louise.Davies@dartmouth.edu

Abstract

Background The Standards for Quality Improvement Reporting Excellence (SQUIRE) Guidelines were published in 2008 to increase the completeness, precision and accuracy of published reports of systematic efforts to improve the quality, value and safety of healthcare. Since that time, the field has expanded. We asked people from the field to evaluate the Guidelines, a novel approach to a first step in revision.

Methods Evaluative design using focus groups and semi-structured interviews with 29 end users and an advisory group of 18 thinkers in the field. Sampling of end users was purposive, to achieve variation in work setting, geographic location, area of expertise, manuscript writing experience, and experience in healthcare improvement and research.

Results Study participants reported that SQUIRE was useful in planning a healthcare improvement project, but not as helpful during writing because of redundancies, uncertainty about what was important to include and lack of clarity in items. The concept of 'planning the study of the intervention' (item 10) was hard for many participants to understand. Participants varied in their interpretation of item 10b, which asks about the mechanism by which changes were expected to occur. Participants disagreed about whether iterations of an intervention should be reported. Level of experience in writing, knowledge of the science of improvement and the evolving meaning of some terms in the field are hypothesised as the reasons for these findings.

Conclusions The original SQUIRE Guidelines help with planning healthcare improvement work, but are perceived as complicated and unclear during writing. Key goals of the revision will be to clarify items where conflict was identified and outline the key components necessary for complete reporting of improvement work.

  • Quality improvement
  • Qualitative research
  • Healthcare quality improvement

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

Introduction

In 2008, the Standards for Quality Improvement Reporting Excellence (SQUIRE) Guidelines were published, designed to support the scholarly publication of healthcare improvement work. The Guidelines resulted from a nearly 3-year development period. The work included a statement of purpose, followed by a working draft, a modified Delphi process to identify key content items, and consensus meetings to reach a final version.1,2 The goal of SQUIRE was to increase the breadth and frequency of published reports; improve the utility of the reports by enhancing transparency, comprehensiveness and rigour; and encourage reflection on the epistemology and emerging science of improvement.3

In 2012, we began a revision of the SQUIRE Guidelines, anticipating the creation of a new version to be published in 2015. Why was this necessary so soon after their initial release? SQUIRE is guiding a field that is growing and changing—the science of healthcare improvement is dynamic: its methods and boundaries are still under development.4–7 The field is still determining to what degree the work should be considered research,8 and there is not yet consensus on the definition of the science.9,10

Because of this environment, we decided the SQUIRE Guidelines revision process should explicitly include efforts to understand the issues in the field as experienced by those working in improvement. While guidance for the developers of reporting guidelines suggests consumers might be involved as members of the development team,11 asking consumers to evaluate existing guidelines, to our knowledge, has not been done before.

We report here the findings from an evaluation performed to obtain people's views on SQUIRE as a tool for writing scholarly healthcare improvement manuscripts. Our research question was, “What are people's experiences with and impressions of the SQUIRE Guidelines?” These findings will support the development of SQUIRE 2.0—the updated set of guidelines that will be released in late 2015.

Methods

The evaluative design had three objectives: determine how people interpret and apply the items of SQUIRE; outline barriers to writing about healthcare improvement work; and identify key emerging issues for those doing scholarly healthcare improvement writing. We sought input from both end users of the guidelines and an advisory group of thinkers in the field of healthcare improvement.

End user evaluations of SQUIRE were obtained through four 1½ h focus groups (one in person and three via interactive two-way video) and nine 1 h semi-structured interviews (all by telephone) performed between October 2012 and June 2013. A topic guide was used to conduct the first part of the interview or focus group, and then the participants were offered the opportunity to comment on the Guidelines item by item. Interviews and focus groups were digitally recorded and professionally transcribed. All interviews and focus groups were completed by a researcher with extensive experience in these methods (LD). The researcher was knowledgeable about healthcare improvement but uninvolved in the development or dissemination of the SQUIRE Guidelines prior to beginning the research, and she introduced herself as such at the start of each interview and focus group. For all but two of the interactions, she was the only person present with the participants; the other authors (all of whom had been part of the development of the Guidelines) were not present. The two focus groups held in Sweden also included two people who had been involved in the Guideline development: one to manage technical issues (PB) and one to allow immediate iteration of the interview guide if necessary (GO). Prior to completing the interviews, the main interviewer (LD) had no personal relationship with any of the interviewees.

Input from thinkers in the field was obtained from an advisory group of 18 people. Thinkers were defined as people who had contributed to healthcare improvement by developing the field through action or writing, disseminating key ideas and/or teaching extensively about healthcare improvement. Participants in this group were identified based on prior involvement in the SQUIRE Guideline development or publications related to the methods and science of healthcare improvement. The advisory group contributed in three ways, in the style of participatory action research—in which participants both help direct and are the subject of the research.12 First, they provided a means of triangulating emerging findings from end users; triangulation is the process of checking the veracity of emerging findings by seeing whether they are present in other data sources.13 Second, they contributed data about existing and emerging issues around SQUIRE item definitions and interpretation. Third, they helped with data interpretation and the process of translating that interpretation into changes in the SQUIRE Guidelines.

The sampling strategy for composing the end user group was purposive,14 employing a ‘maximal variation’ sampling technique.14 Maximal variation sampling means participants are chosen to vary across a range of characteristics, rather than just one or two. We sought variety in (1) work settings—for example, both frontline care providers and improvement consultants; (2) healthcare specialties—that is, medical specialties as well as different professions; (3) training—for example, social sciences, administration and healthcare; (4) experience in writing and in healthcare improvement work; and (5) geographic location. Sampling for the advisory group focused on achieving balance by gender, profession and training. Nineteen individuals were invited to achieve a group of 18.

Invitations to end users were sent by email, describing the request for an interview or participation in a focus group as a research project. Candidates for interviews and focus groups were drawn from lists of alumni and faculty of improvement fellowships, programmes and organisations, and lists of healthcare improvement conference attendees. To locate authors outside the range of people already on these lists, we completed a Google Scholar search for people citing SQUIRE in their improvement publications. Sampling was considered complete when thematic saturation was reached—no new ideas or concepts were emerging.13

Interviews and focus groups proceeded in small waves, with an interim analysis between each wave. Triangulation of the data from end users was completed by asking later interviewees and members of the advisory group to confirm or disconfirm key statements from prior interviewees. We analysed the data using a grounded theory approach13 through close reading of transcripts and written comments, followed by initial coding of data into major categories. Finally, through a process of data reduction, important findings were collapsed into major themes. Interim analyses of emerging findings were presented monthly to study co-investigators ([redacted]) and twice to the advisory group described above. The goal of the presentations was to triangulate emerging findings, identify further areas for investigation in subsequent data collection from end users, interpret the data and reach consensus when there were conflicts in coding or analysis.

The COREQ Guidelines were used to guide the reporting of this work.15

Results

Twenty-nine end users of SQUIRE participated in focus groups (n=20) and semi-structured interviews (n=9). Participants were located in the USA, Canada, Sweden, the UK and Norway. Forty-two people were approached to achieve this sample size. Work settings represented included government, private business, academic and community. Experience with healthcare improvement ranged from self-taught authors, to current students in fellowship programmes, to faculty supporting healthcare improvement work—themselves with varied levels of experience in healthcare improvement. Nurses, physicians, physical therapists, administrators and doctorally prepared social scientists participated. Medical specialties represented were internal medicine, obstetrics and gynaecology, rheumatology, neonatology, paediatrics, infectious disease, psychiatry, anaesthesia and critical care. Of the 29 end users, 11 were not native English speakers. Findings about how SQUIRE has been experienced by users fell into three major themes, described below.

Arranging SQUIRE items into the traditional framework of a scientific manuscript is challenging for end users

The SQUIRE Guidelines were uniformly praised as a document that was very useful for planning a healthcare improvement project, but harder to use in the task of writing about the work. Said participant 12, a PhD social scientist and healthcare improvement consultant: “We use SQUIRE a lot for planning—we complete the sections up through the methods at the time we design the study…[but] SQUIRE creates sort of long reports if followed exactly.” Participant 21, a graduate of a 2-year fellowship in healthcare improvement, said, “The Guidelines tell me everything [to think about], but they don't tell me what is important to include [in my manuscript]…there is no hierarchy…”

Guideline items 5, 6, 9–11 and 13–19 elicited substantial reactions that were informative to the revision process (table 1). Common concerns included requests for seemingly similar information in both the methods and results sections, a sense that some items were in the wrong place, and a lack of clarity about what information was being requested in certain items. In item 10, the concept of reporting the ‘study of the intervention’ was simply not comprehensible to several participants. These participants suggested tools or visuals were needed to distinguish between ‘the work of the intervention’ and ‘the study of the intervention’ components of a project. Last, some felt the Guidelines seemed to presume a linearity to improvement work that might not be present.

Table 1

Selected Standards for Quality Improvement Reporting Excellence (SQUIRE) Guideline items that elicited specific comments from end users (quotes from focus groups and interviews that are illustrative of specific areas of concern in the document)

Among end users, usability perceptions of the SQUIRE Guidelines vary with experience in scholarly medical writing

Participants with less experience in scholarly medical writing found the SQUIRE Guidelines harder to understand and apply than those with more such experience. For example, participant 8, a PhD social scientist new to improvement work reporting, gave an initial impression of the Guidelines as follows: “Everything [in the SQUIRE checklist] is in such small pieces. You do not get the whole picture of what you are supposed to be doing.” She further explained that it felt hard to understand the whole of the task of writing about a healthcare improvement project because the checklist approach did not help her understand which parts of the work should be reported where or in what order. This observation was echoed by many others in the sample with a similar background in non-medically oriented scholarly writing.

Among those with more experience in scholarly medical writing but new to scholarly writing about healthcare improvement work, the SQUIRE Guidelines made more sense, and they identified similarities to the experience of using other publication guidelines. Participant 9, a nurse researcher with experience in both mixed methods research and clinical trials, noted that she found the Guidelines generally useful and clear, but had to draw on her mixed methods background to develop ways to teach the concept of reporting ‘the work’ as well as the ‘study of the work’ of healthcare improvement: “The doctors [I was advising]—they only knew statistics and quantitative work—they were only familiar with ‘context free’ research. I explained ‘[reporting QI work]’ is like writing up trial results and then also the experience of running that trial”.

Among participants who had experience with scholarly writing about healthcare improvement, or who had previous experience with scientific medical writing in general, the SQUIRE Guidelines were perceived as easier to use. Said participant 13, a physician researcher: “Much of the SQUIRE guidelines is commonsense, it is a transplantation of research methods into QI, making sure your QI project is written up as rigorously as traditional research”. Participant 17, a physician graduate of an improvement fellowship, reflected on his experience since first using the SQUIRE Guidelines several years prior. He confirmed what others in his focus group noted about the challenges of using the Guidelines and the impossibility of trying to include every item, saying “I whined to my advisor [when I was a fellow] about that very problem—[the document you create if you use SQUIRE exactly as written is unintelligible]. But the problem is I used it to write my very first paper. SQUIRE seems very simple to me now”. He had realised with experience and re-reading of the Guidelines that SQUIRE did not require him to include every item in the manuscript, and that part of the challenge he had faced as a fellow was the work of being new to scientific writing in general.

Items that touched on evolving areas in healthcare improvement were interpreted differently across both end users and the advisory group of thinkers

The evolution of the healthcare improvement scholarly literature in the years since the publication of the SQUIRE Guidelines has led to the development of concepts that were not fully anticipated at the time of initial release. Items 10b and 13aii–iv in particular revealed the areas where these changes are occurring. The specific items are shown in their entirety in box 1.

Box 1

Standards for Quality Improvement Reporting Excellence (SQUIRE) items that were interpreted differently across end users and the advisory group of thinkers.

Item 10. Planning the study of the intervention

  a. Outlines plans for assessing how well the intervention was implemented (dose or intensity of exposure)

  b. Describes mechanisms by which intervention components were expected to cause changes, and plans for testing whether those mechanisms were effective

  c. Identifies the study design (eg, observational, quasi-experimental, experimental) chosen for measuring impact of the intervention on primary and secondary outcomes, if applicable

  d. Explains plans for implementing essential aspects of the chosen study design, as described in publication guidelines for specific designs, if applicable (see, eg, http://www.equator-network.org)

  e. Describes aspects of the study design that specifically concerned internal validity (integrity of the data) and external validity (generalisability)

Item 13. Outcomes

  a. Nature of setting and improvement intervention

    i. Characterises relevant elements of setting or settings (eg, geography, physical resources, organisational culture, history of change efforts), and structures and patterns of care (eg, staffing, leadership) that provided context for the intervention

    ii. Explains the actual course of the intervention (eg, sequence of steps, events or phases; type and number of participants at key points), preferably using a timeline diagram or flow chart

    iii. Documents degree of success in implementing intervention components

    iv. Describes how and why the initial plan evolved, and the most important lessons learned from that evolution, particularly the effects of internal feedback from tests of change (reflexiveness)

  b. Changes in processes of care and patient outcomes associated with the intervention

    i. Presents data on changes observed in the care delivery process

    ii. Presents data on changes observed in measures of patient outcome (eg, morbidity, mortality, function, patient/staff satisfaction, service use, cost, care disparities)

    iii. Considers benefits, harms, unexpected results, problems, failures

    iv. Presents evidence regarding the strength of association between observed changes/improvements and intervention components/context factors

    v. Includes summary of missing data for intervention and outcomes

For full guidelines, see http://www.squire-statement.org/assets/pdfs/SQUIRE_guidelines_table.pdf

Item 10b states, “Describes mechanisms by which intervention components were expected to cause changes, and plans for testing whether those mechanisms were effective”. In the advisory group of thinkers, this item was interpreted to mean ‘the theory’ of an intervention. The word ‘theory’, however, meant different things to different people. For some, it meant the ‘mechanism by which an intervention was expected to work’; for others, it meant a named improvement approach such as ‘lean or six sigma’; and for still others, it meant a ‘logic model’. These responses pointed to different aspects of study design and methods that would require clarification in the revised Guidelines.

Items 13aii–iv are “Explains the actual course of the intervention…”, “Documents degree of success in implementing…” and “Describes how and why the initial plan evolved…”. On a mechanical level, participants disagreed about whether these items belonged in the results section or the methods section. On a more global level, participants disagreed about whether iterations and development of a project should be included in a manuscript at all, even though the items called for their inclusion. The confusion about the interpretation of these items was well illustrated by one participant, who said: “…the important things you can't really write about! …for me what was really important was how I thought about what the project was…and how it failed because [what I proposed] wasn't relevant to the people who were supposed to do the intervention. How do you write about that?” (participant 15, physician graduate of improvement fellowship). Some felt that failed iterations of the work, which might be included as part of item 13aiv, should be presented because they could help others to learn: “…is the failure [of the intervention] unique? If it is not generalizable or useful, then it should not be included. If it informs the results, it should be included [in the paper]” (participant 46, head of an institutional improvement programme). Others felt that journal editors would likely not want this information in a manuscript, saying, “That experience [of getting buy-in from the participants and developing the intervention] may well have usefully formed a paper in itself, but may not be of interest to the sorts of journals, and the caliber of journal that I would publish the results in…” (participant 14, physician author). Still others felt it was incorrect to include anything except the most successful parts of the work, because to do anything else would be to ‘drift into research’ and ask too much of the authors: “If we put the onus on everybody out there who's trying to improve care to deal with that sophisticated question [of why and how the improvement occurred], I just think we are putting a barrier in place that is going to be a mountain” (participant 22, physician improvement consultant). These responses showed that if reporting of a project's development, iterations and failures is desired, SQUIRE 2.0 will need to make this explicit and provide guidance on how to incorporate such material into a manuscript.

Discussion

As the first step in the revision of the SQUIRE Guidelines, we have evaluated their current status by working directly with both the people who have used SQUIRE and an advisory group of thinkers in the field of healthcare improvement. To our knowledge, an evaluation of this type has not been done before for a publication guideline. We used this approach because we believed end users in particular would provide unique and important insights into the changes and challenges of the field, informing and strengthening the revision process.

At the most basic level, certain items in SQUIRE were interpreted by end users to be redundant, unclear or simply not comprehensible. If this had been the only issue, careful copy editing of the document and broadening of the explanation and elaboration document would have been all that was needed. However, many of the comments and discussions around particular items illustrated broader issues, which we hypothesise are related to skill in writing, knowledge of the science of improvement and the evolution of the field. The development of SQUIRE 2.0 will require attention to these matters.

End users’ impressions of the SQUIRE Guidelines varied by their level of experience in both research and writing, and by their skills and knowledge in the field of healthcare improvement in general. Those with more experience felt the Guidelines were easier to understand than those with less experience. We interpret these findings to reflect the hard work of learning to write for the scientific literature as well as the challenge of writing about healthcare improvement. This type of writing requires bringing many dimensions to the written form, such as time dependence and contextual issues, for which we lack well-defined scientific language and methods.

A potential response to end users’ descriptions of the challenge of writing might be to say that the role of SQUIRE is to urge people to do high-quality healthcare improvement work and report it completely, not to teach people to write. However, the relative newness of scholarly writing about improvement, and the fact that one of the explicit goals of SQUIRE was to advance the quality of such writing,3 suggest that attention should be paid to the work of teaching the skill of scholarly writing. We endorse the goal of the developers of the Equator Network, which is to provide students and new researchers with guidance on what constitutes good research practice and reporting, as educating the next generation of researchers will help move the field forward.16 Thus, one important goal of SQUIRE 2.0 will be to support the development of skills in scholarly writing about improvement.

Disagreement over key concepts in healthcare improvement reporting was identified among end users and the advisory group of thinkers. Two areas came up specifically: the use of ‘theory’ to guide improvement work (and what the word ‘theory’ means), and whether iterations of improvement work (and failed iterations, in particular) should be reported. These disagreements may be simply due to evolution in the field, but may also be due to a lack of knowledge of the science of improvement. Whichever is the case, SQUIRE 2.0 must address these concepts directly so that authors understand the need to report them. Given that SQUIRE is intended for writing—but reporting theory and the iterations of a work requires attention during the design and execution of a study—one might wonder how SQUIRE could hope to help. Moher et al17 have reported indirect evidence that publication guidelines affect researchers’ study design practices, a finding we confirmed in our study, as participants reported that SQUIRE was very useful for planning improvement work. Thus, there is reason to hope that a clear SQUIRE 2.0 would be able to support better reporting in these important areas.

The work we have presented here has limitations. Qualitative evaluations such as this one may not reflect the full range of relevant findings. To capture the widest range possible, we enrolled a varied group of participants until we reached thematic saturation—the point at which no additional findings were emerging from the data obtained in additional interviews or focus groups.13

Another challenge in qualitative evaluations relates to generalisability to other populations. To increase the chance that our findings would represent people writing in the field generally, we used maximum variation sampling.18 It is possible, however, that there are users of SQUIRE who differ from our participants but who have not yet published or were not otherwise locatable through internet and personal contact searches. Readers may also be concerned that the presence of authors of the prior Guidelines during two of the focus groups might have inhibited responses, but we did not find the responses during these focus groups to be substantially different from those of other participants. Lastly, since interviews were conducted in English, subtle nuances of meaning may have been difficult to capture from those who were not native English speakers.

This unique evaluation of the SQUIRE Guidelines by end users and an advisory group of thinkers in the field has revealed the areas requiring clarification and identified the information needs of users. General item clarity, the use of theory in guiding an intervention, the concept of studying the intervention and the reporting of iterations of an intervention will require attention in SQUIRE 2.0. Further, SQUIRE 2.0 should seek to be a source of reliable definitions and a resource for the important concepts noted above as the field evolves. The findings from this study will support the initial revision of the SQUIRE Guidelines, so that they can be tested and further revised prior to the release of SQUIRE 2.0 in the fall of 2015.

References

Footnotes

  • Twitter Follow Louise Davies at @louisedaviesmd

  • Contributors LD: conception and design, data acquisition, analysis, drafting of the manuscript, obtaining funding; PB, FD, DS: interpretation of data, critical revision of manuscript, obtaining funding; GO: conception and design, interpretation of data, critical revision of manuscript, obtaining funding.

  • Funding The Robert Wood Johnson Foundation (#70024), The UK Health Foundation (#7099).

  • Disclaimer The views expressed do not necessarily represent the views of the Robert Wood Johnson Foundation, The UK Health Foundation, The Department of Veterans Affairs or the United States Government.

  • Competing interests None declared.

  • Patient consent End user interview and focus group participants were informed at the time of the invitation and at the start of their meetings that the project was being conducted as research, that their voices would be recorded and that their confidentiality would be protected during the research project. Documentation of written informed consent was not required by the CPHS; agreement to participate served as consent.

  • Ethics approval  This study was approved by The Dartmouth Committee for the Protection of Human Subjects (CPHS) as research, exempt from human subjects review.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement Raw data in de-identified form can be made available on direct request to the authors.