
Standards for UNiversal reporting of patient Decision Aid Evaluation studies: the development of the SUNDAE Checklist
  1. Karen R Sepucha (1)
  2. Purva Abhyankar (2)
  3. Aubri S Hoffman (3)
  4. Hilary L Bekker (4)
  5. Annie LeBlanc (5)
  6. Carrie A Levin (6)
  7. Mary Ropka (7)
  8. Victoria A Shaffer (8)
  9. Stacey L Sheridan (9)
  10. Dawn Stacey (10, 11)
  11. Peep Stalmeier (12)
  12. Ha Vo (13)
  13. Celia E Wills (14)
  14. Richard Thomson (15)
  1. Division of General Internal Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
  2. Faculty of Health Sciences and Sport, University of Stirling, Stirling, UK
  3. Department of Health Services Research, University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  4. Leeds Institute of Health Sciences, University of Leeds, Leeds, UK
  5. Department of Family Medicine and Emergency Medicine, Université Laval Faculté de médecine, Québec, Canada
  6. Department of Research, Healthwise Inc, Boise, Idaho, USA
  7. Public Health Sciences, University of Virginia School of Medicine, Charlottesville, Virginia, USA
  8. Health Sciences and Psychological Sciences, University of Missouri, Columbia, Missouri, USA
  9. The Reaching for High Value Care Team, Chapel Hill, North Carolina, USA
  10. Ottawa Hospital Research Institute, University of Ottawa, Ottawa, Ontario, Canada
  11. School of Nursing, University of Ottawa, Ottawa, Ontario, Canada
  12. Health Evidence, Radboud University Nijmegen Medical Center, Nijmegen, The Netherlands
  13. Division of General Internal Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
  14. College of Nursing, Ohio State University, Columbus, Ohio, USA
  15. Institute of Health and Society, Newcastle University, Newcastle upon Tyne, UK
  Correspondence to Dr Karen R Sepucha, Department of Medicine, Harvard Medical School, Massachusetts General Hospital, Boston, MA 02114, USA; ksepucha@mgh.harvard.edu

Abstract

Background Patient decision aids (PDAs) are evidence-based tools designed to help patients make specific and deliberated choices among healthcare options. The International Patient Decision Aid Standards (IPDAS) Collaboration review papers and Cochrane systematic review of PDAs have found significant gaps in the reporting of evaluations of PDAs, including poor or limited reporting of PDA content, development methods and delivery. This study sought to develop and reach consensus on reporting guidelines to improve the quality of publications evaluating PDAs.

Methods An international workgroup, consisting of members from IPDAS Collaboration, followed established methods to develop reporting guidelines for PDA evaluation studies. This paper describes the results from three completed phases: (1) planning, (2) drafting and (3) consensus, which included a modified, two-stage, online international Delphi process. The work was conducted over 2 years with bimonthly conference calls and three in-person meetings. The workgroup used input from these phases to produce a final set of recommended items in the form of a checklist.

Results The SUNDAE Checklist (Standards for UNiversal reporting of patient Decision Aid Evaluations) includes 26 items recommended for studies reporting evaluations of PDAs. In the two-stage Delphi process, 117/143 (82%) experts from 14 countries completed round 1 and 96/117 (82%) completed round 2. Respondents reached a high level of consensus on the importance of the items and indicated strong willingness to use the items when reporting PDA studies.

Conclusion The SUNDAE Checklist will help ensure that reports of PDA evaluation studies are understandable, transparent and of high quality. A separate Explanation and Elaboration publication provides additional details to support use of the checklist.

  • shared decision making
  • patient-centred care
  • checklists
  • patient education

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Introduction

Patient decision aids (PDAs) are evidence-based interventions designed to help people make informed and deliberated choices among healthcare options.1 2 At a minimum, PDAs provide accurate and unbiased information on options and relevant outcomes, help patients clarify their values and treatment preferences, and provide guidance in steps of decision making and deliberation.3 4 

The number and types of PDAs being developed, tested and reported in the literature have expanded considerably. The latest published update of the Cochrane systematic review of PDAs included 105 randomised trials.3 Compared with usual care, using PDAs improves patients’ knowledge of and expectations about healthcare outcomes, reduces decisional conflict, increases decision quality and increases patient participation in decision making.3 The International Patient Decision Aid Standards (IPDAS) Collaboration has synthesised evidence across professional and disciplinary domains to develop quality standards for PDA development. In 2013, the IPDAS Collaboration published 12 papers updating the definitions, evidence and emerging research areas for all quality dimensions.5–16

The evidence base for PDA effectiveness relies on accurate, complete and high-quality reports of evaluation studies. Despite the availability of reporting guidelines for a wide range of methodologies and interventions, investigators involved in the latest update of IPDAS and Cochrane reviews identified pervasive and significant gaps in published PDA evaluation studies. The problems included poor or limited reporting of the PDA content, development methods, delivery and evaluation methods.9 12 17 For example, the description of PDAs was often so limited that Cochrane reviewers had to obtain copies of the PDAs to determine their content and elements (eg, benefits and harms, patient stories, probabilities).18 Further, a systematic review of measures used in PDA evaluations found that the description of the outcome measures used was often inadequate, particularly for the core outcomes related to decision quality and decision-making process outcomes.17

Without comprehensive and clear reporting of the PDA content and evaluation methods, there is limited evidence to understand which PDAs work and in what context. To address these gaps, an international workgroup was formed to develop and reach consensus on a set of reporting guidelines to improve the quality of manuscripts describing PDA evaluation studies.

Methods

The development of the SUNDAE Checklist (Standards for UNiversal reporting of patient Decision Aid Evaluations) followed the recommended process for developing and disseminating reporting guidelines put forth by EQUATOR (Enhancing the QUAlity and Transparency Of health Research).19 20 The development process has five phases: (1) planning, (2) drafting, (3) consensus, (4) dissemination and (5) maintenance (see figure 1). The first three phases are complete and reported in this manuscript. The phases are listed generally in the order they were initiated; however, the process was iterative within and between phases rather than purely sequential.

Figure 1

SUNDAE guideline development process to date. EQUATOR, Enhancing the QUAlity and Transparency Of health Research; IPDAS, International Patient Decision Aid Standards; SUNDAE, Standards for UNiversal reporting of patient Decision Aid Evaluations.

Planning phase

The purpose of this phase was to secure resources, conduct a needs assessment, review existing guidelines, and set a timeline and scope for the project. A reporting guideline workgroup (IPDAS-RG), led by KRS and RT, was set up to facilitate the guideline development. The cochairs received support and endorsement from the IPDAS Steering Committee, and then invited authors from each of the 12 IPDAS chapters to join the workgroup. Fourteen experts in PDA research representing a range of countries and disciplines (medicine, nursing, psychology, decision science and health services research) agreed to participate in the IPDAS-RG. The workgroup communicated through regular conference calls and emails, and held three in-person meetings to enable discussion and debate over the course of the project.

The IPDAS-RG conducted a needs assessment to characterise the scope and range of issues that might be addressed with improved reporting of PDA evaluation studies. Workgroup members examined each of the 12 IPDAS core dimensions, and summarised gaps in current reporting, the significance of the problem, evidence supporting the inclusion of specific items on PDA evaluations and the challenges to full reporting. We grouped gaps into major categories (eg, terminology, decision aid components, research methods) and counted the number of chapters documenting a reporting gap in those categories.

Next, the workgroup reviewed and catalogued existing guidelines to determine whether the gaps could be addressed in full or in part by existing reporting guidelines. Relevant guidelines included CONSORT (CONsolidated Standards of Reporting Trials) and its extensions,21 TREND (Transparent Reporting of Evaluations with Nonrandomized Designs),22 STROBE (Strengthening The Reporting of Observational Studies in Epidemiology),23 TIDieR (Template for Intervention Description and Replication),24 CReDECI (Criteria for Reporting the Development and Evaluation of Complex Interventions in healthcare),25 SQUIRE (Standards for Quality Improvement Reporting Excellence),26 and STaRI (Standards for Reporting Implementation studies of complex interventions).27

The group prioritised issues for inclusion in the guidelines when (1) there was evidence that the item, or limited reporting of it, affected outcomes and/or the interpretation of decision aid evaluation results; (2) inconsistent or limited reporting impaired the ability to conduct meta-analyses and/or systematic reviews to generate evidence; or (3) reporting remained inconsistent or limited despite the item appearing in existing reporting guidelines. The workgroup also agreed that the new guidelines would avoid overlap with other guidelines where possible, use language consistent with existing guidelines where items are similar, and focus on the reporting of PDA evaluation studies. Although some of the items may be relevant for studies describing the development or implementation of PDAs, the focus is on evaluation studies. Once the need for reporting guidelines was clarified, the workgroup submitted proposals to governmental agencies and foundations to secure funding for the work.

Drafting phase

The goal of this phase was to generate, test and refine an initial list of items to be included in the SUNDAE Checklist. As shown in figure 1, this phase had three steps. First, the workgroup used the findings from its needs assessment and review of existing guidelines to draft emerging items. Second, the workgroup came together in the first in-person meeting to review and comment on the items, discussing each item's importance, relevance, overlap with existing guidelines and the feasibility of reporting the requested information. Two outside experts, lead authors of the SQUIRE and TIDieR reporting guidelines, also attended the meeting and provided advice on guideline development methodology and process. The meeting resulted in consensus on the scope and purpose of the guideline, a draft list of 83 potential items, a clear plan for next steps and an allocation of tasks.

Third, IPDAS-RG members pilot-tested the 83 draft checklist items against nine published manuscripts of PDA evaluation studies. The selected publications were authored by workgroup members so that reviewers could determine whether missing information was available but unreported, or genuinely unavailable. Each article was assessed by two reviewers: the article's author and an independent reviewer. The reviewers assessed whether and where each proposed item was reported, and how easy each item was to find. If an item was not present, reviewers assessed whether the information could feasibly have been reported in the manuscript. The results of the pilot study were discussed among the workgroup, and the items were revised to improve clarity and feasibility.

Consensus phase

In this phase, we engaged a wide range of experts to review and comment on the draft items for the SUNDAE Checklist. First, we conducted a 90 min workshop at the International Shared Decision Making Conference (July 2015) to test the draft items, explore areas of disagreement and address any gaps. Attendees raised several issues that the IPDAS-RG had also discussed, including potential overlap with existing guidelines, the feasibility of including the information in an evaluation paper, potential overlap with parallel work on guidelines for the PDA development process,28 and the relationship to IPDAS.

Second, we conducted a two-stage modified Delphi survey online with a wider network of international stakeholders to help prioritise items and build consensus around the checklist.15 17 We recruited expert stakeholders worldwide, including (1) researchers who had published or planned to publish evaluation studies of PDAs, (2) clinicians and administrators who read and appraise PDA studies, (3) patient/consumer advocates, (4) journal editors, (5) funders of PDA research, and (6) other reporting guideline developers. We sent invitations through the IPDAS Collaboration group, the Shared Decision Making Facebook page and listserv, and the Society for Medical Decision Making newsletter, and through direct outreach by workgroup members. We also used snowball recruitment to increase the respondent base, asking potential participants to forward the invitation to their colleagues. Anyone who expressed interest was sent a link to the survey. We calculated the response rate for each round by dividing the number of subjects who completed the survey by the number who were sent the survey link.
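
To make the response-rate definition concrete, here is a minimal arithmetic sketch in Python using the round totals reported in the Results; the variable names are ours, for illustration only, and this is not the study's analysis script:

    # Response rate = number completing the survey / number sent the survey link
    sent_round1, completed_round1 = 143, 117
    sent_round2, completed_round2 = 117, 96   # round 2 went only to round 1 completers
    print(f"Round 1: {completed_round1 / sent_round1:.0%}")  # Round 1: 82%
    print(f"Round 2: {completed_round2 / sent_round2:.0%}")  # Round 2: 82%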

For the Delphi process, draft items were rated on a 9-point scale. The a priori rule for 'disagreement' was defined as having 30% or more responses in the lowest tertile (1–3) and 30% or more in the highest tertile (7–9). Because no item met this threshold, the workgroup instead flagged for review and discussion any item for which fewer than 80% of importance ratings fell in the highest tertile (7–9).29 Open-ended comments were reviewed and used to help interpret scores and revise items. In addition, we considered evidence that the items affected outcomes, the importance of the items for meta-analyses and/or systematic reviews and, for items that overlapped with other checklists, the importance of inclusion given misreporting or poor reporting in the existing literature. We considered weighting responses by stakeholder group, but decided against it because the majority of participants (82%) indicated considerable expertise in two or more areas, making it difficult to assign each participant to a single stakeholder group. We also examined whether the responses of the IPDAS-RG workgroup members differed from those of other respondents.
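
As an illustration of how these a priori rules operate, the following Python sketch classifies a single item from its 9-point ratings. The function name, thresholds as coded and the example ratings are our reconstruction of the stated rules, not the workgroup's actual analysis code:

    def classify_item(ratings):
        """Apply the a priori Delphi rules to one item's 9-point ratings."""
        n = len(ratings)
        low = sum(1 for r in ratings if 1 <= r <= 3) / n    # share in lowest tertile (1-3)
        high = sum(1 for r in ratings if 7 <= r <= 9) / n   # share in highest tertile (7-9)
        if low >= 0.30 and high >= 0.30:
            return "disagreement"        # a priori definition of disagreement
        if high < 0.80:
            return "flag for review"     # discussed further by the workgroup
        return "retain"

    print(classify_item([9, 8, 7, 9, 8, 7, 9, 8, 6, 9]))  # retain (90% rated 7-9)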

In round 1 of the Delphi, participants rated the importance (1=not at all, 9=extremely) and the clarity of wording (1=not at all, 9=completely) for 51 items. For their ratings, participants were encouraged to consider whether the interpretation, analysis or appraisal of the paper would be adversely affected if the information was not included in the manuscript. We also encouraged participants to provide open-ended comments on each item and at the end of the survey. At the second in-person meeting (March 2016), the IPDAS-RG workgroup reviewed the round 1 data and revised the items for round 2.

In round 2, participants were provided feedback from round 1 and asked to rate the revised 31 items on the same 9-point importance scale and a 9-point essential scale (1=not at all, 9=absolutely). The essential scale was introduced in an attempt to spread responses and differentiate between items that should remain in the checklist and those that might be considered for removal. Open-ended comments were invited. In round 2, the survey also elicited participants’ perceptions of the checklist, including their intention to use the ‘reporting checklist’, with five statements rated on a 5-point scale ranging from 1=strongly agree to 5=strongly disagree. In the final in-person meeting (July 2016), the workgroup reviewed the data from round 2, discussed the ratings and comments on individual items, and finalised the SUNDAE Checklist.

Results

Planning phase

Reporting gaps identified by IPDAS chapters included lack of information about the PDA and its components (n=7 chapters); lack of information about the PDA development process (n=4); lack of information about implementation and sustainability (n=3); lack of clarity of terminology (n=2); and lack of information about measurement properties (n=1).

The review of relevant reporting guidelines found that CONSORT, STROBE and TREND focused primarily on issues that help readers understand the overall study rationale and design, potential sources of bias, and results and their implications. TIDieR and CReDECI focused on intervention characteristics, delivery and use. Finally, SQUIRE and STaRI addressed some of these same issues and additionally covered intervention adaptation, delivery fidelity and potential financial conflicts. None of the existing reporting guidelines addressed how an intervention might be accessed, the key features of intervention content and design relevant to the efficacy of decision aids, or a full set of methods for understanding how and why an intervention works.

Drafting phase

The workgroup generated a list of potential items and piloted the draft checklist on nine manuscripts. After this pilot, the workgroup removed, reworded and grouped items to reduce the number, and clarified the wording and organisation of items. Workgroup members then developed a brief rationale explaining each item, summarising evidence underlying the need for each item and outlining how accuracy and completeness of reporting support systematic reviews of the evidence for PDAs. The resulting draft checklist had 51 items and was used in round 1 of the Delphi process.

Consensus phase

In the Delphi process, 117/143 (82%) invited participants completed round 1. Round 2 was sent to all those who completed round 1. The majority, 96/117 (82%), completed round 2. Respondents represented 14 different countries, with about half from the USA (table 1). No item met the a priori definition for disagreement in either round. In round 1, 32/51 (63%) items had ≥80% of the importance ratings in the highest tertile (scores of 7–9), 17/51 (33%) items had 60%–79% of ratings in the highest tertile, and 2/51 (4%) items had 50%–59% of ratings in the highest tertile. Participant comments revealed concerns about lack of clarity of some items, the length of the checklist, redundancy among items within the checklist and redundancy of items with other checklists.

Table 1

Two-stage modified Delphi process participant characteristics*

The IPDAS-RG workgroup respondents (n=12) scored items similarly to the other respondents (n=105). For most individual items (44/51, 86%), the median importance scores were within ±0.5 points of each other, so we did not separate the workgroup responses from those of the other respondents. After detailed discussion of each item, the revised checklist had 31 items.
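
A sketch of the kind of comparison described here, assuming each item's ratings are held as two lists, one per respondent group; the data and names are hypothetical:

    from statistics import median

    def medians_similar(workgroup, others, tolerance=0.5):
        # Medians count as 'similar' if they differ by no more than 0.5 points
        return abs(median(workgroup) - median(others)) <= tolerance

    # Hypothetical importance ratings for one item
    print(medians_similar([8, 9, 7, 8, 9], [7, 8, 8, 9, 7, 8]))  # True (both medians are 8)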

The ratings from round 2 indicated even higher consensus as 24/31 (77%) items had ≥80% of importance ratings in the highest tertile (7–9) and 7/31 (23%) had 60%–79% of importance ratings in the highest tertile. There were fewer open-ended comments in round 2. Many comments were positive about the reduced length and clarity of items. Only a few raised concerns about redundancy. Table 2 presents the finalised SUNDAE Checklist with 26 items, and an accompanying glossary is included as an online supplementary appendix.

Supplementary file 1

Table 2

SUNDAE Checklist

Willingness to use checklist

Virtually all respondents, 89/91 (98%), agreed that the checklist would improve the reporting of PDA studies, and 88/91 (97%) agreed they would recommend it to colleagues or students. Nearly all also indicated high intentions to use it when preparing manuscripts (80/82, 98%), designing studies (75/78, 96%) or supporting peer review of manuscripts (79/84, 94%).

Discussion

Key findings

The reporting guidelines included in the SUNDAE Checklist are the result of a rigorous, iterative development process and are supported by substantial international consensus. While some of the items were included due to a strong evidence base linking their reporting to improved outcomes or reduced reporting bias, others were included to support systematic reviews and meta-analyses in efforts to advance the evidence base. Some items included in other reporting guidelines were adopted because they were specific to PDA evaluation studies and reflected factors likely to impact PDA effectiveness. The final guidelines reflect both evidence-based and expert consensus-based items to improve the reporting of evaluation studies of PDAs.

Purpose and scope

The purpose of SUNDAE is to improve the transparency, quality and completeness of reporting of evaluation studies of PDAs. Evaluation studies may include efficacy studies, comparative effectiveness studies or studies testing components of PDAs, with a range of study designs. The checklist is not designed for manuscripts whose sole purpose is to report the development or implementation of PDAs, nor for reports of systematic reviews or meta-analyses of PDA studies. The checklist covers any decision support intervention that meets the definition of a PDA adopted by the Cochrane systematic review of PDAs or the IPDAS Collaboration,2 3 including decision aids designed for use during the consultation with a healthcare provider.

Using the SUNDAE guidelines

The SUNDAE guidelines are intended to be used by researchers who are preparing to report a PDA evaluation study. As with other reporting guidelines, it may also be helpful for researchers to review the guidelines when designing the study protocol, to ensure the recommended items are collected and thus available for reporting. Researchers are encouraged to review item 11 (table 2), which covers the description of the decision aid, as it is important that the intervention meets the definition of a decision aid used in the Cochrane systematic review.3 The guidelines represent recommendations for reporting, and editorial and other constraints may limit authors' ability to include all items in a paper. To overcome this challenge, researchers may reference other publications or sources if the relevant material is available elsewhere. Further, researchers are encouraged to consider using supplementary materials to maintain manuscript brevity while providing access to important details (eg, an online table describing the key features of the decision aid and how it was developed). The accompanying Explanation and Elaboration (E&E) document provides published examples of how authors have included this information in a parsimonious manner.30

Relationship to other guidelines

In recent years there has been a major international effort to develop reporting guidelines, work that led to the creation of the EQUATOR network.19 20 The SUNDAE Checklist is registered on EQUATOR and was carefully designed to minimise repetition of items that are standard in other reporting checklists (eg, it does not include items recommending how to report main findings/primary outcomes). The SUNDAE Checklist is therefore intended to supplement other reporting guidelines, not to serve as a stand-alone checklist. Authors are expected to consult other relevant reporting guidelines as needed (eg, CONSORT for randomised controlled trials, TREND for non-randomised trials of behavioural interventions, STROBE for cohort or case–control studies).21–23 Researchers preparing publications evaluating PDAs may also find it helpful to review guidelines for describing complex interventions (eg, TIDieR or CReDECI).24 25

Achieving consensus in a multidisciplinary community

Research in PDAs exemplifies interdisciplinary applied science. The development, evaluation and implementation of PDAs draw on theories, knowledge and methods from several academic and professional disciplines, including, but not limited to, medicine, nursing, psychology, decision science, health informatics, implementation science and health services research. The workgroup members came from varied disciplines, as did the Delphi respondents, and the items with the most varied ratings in the Delphi were often those on which disciplinary approaches differed (eg, item 15, dealing with process evaluation; and item 16, dealing with the use of theory, frameworks or models). The workgroup is eager to learn from users to continue to refine and advance the items, with a view to future modification.

Strengths and limitations

The key strengths of SUNDAE include the detailed and rigorous development process, the multidisciplinary and international IPDAS-RG workgroup, and the consistently high ratings of items across a large international group of experts. There are some limitations, as with most reporting guidelines. Although there was wide engagement from international researchers in the Delphi process, the majority (80%) were from the USA, Canada and the UK, in part reflecting the leading countries involved to date in the development and evaluation of PDAs. There was limited input from consumers. As with other reporting guidelines, many of the guideline items are expert consensus-based, but this process drew on wide input from stakeholders with appropriate expertise.

Future work

The two remaining phases include (4) dissemination and (5) maintenance (figure 1). As part of the dissemination efforts, the SUNDAE Checklist will be available on multiple websites (including EQUATOR, IPDAS hosted by Ottawa Hospital Research Institute/University of Ottawa, Health Decision Sciences Center hosted by Massachusetts General Hospital, the US Agency for Healthcare Research and Quality, and the UK Health Foundation), and this guideline paper and the accompanying E&E will be open-access publications.

The IPDAS-RG encourages colleagues to use the checklist and to send the corresponding authors feedback on their experience. Those planning evaluation studies of PDAs may find the checklist valuable when designing the study (as is the case with other reporting guidelines). Journal editors may consider using the checklist to provide guidance for authors and reviewers. The IPDAS-RG workgroup plans to gather feedback on the feasibility and utilisation of the checklist and will reconvene in 2 years to assess the need for revisions and updates.

The SUNDAE Checklist and the guidance in the accompanying E&E document will help to improve reporting of PDAs. Clear, comprehensive and consistent reporting of evaluation studies will contribute to the evidence base and advance the collective understanding of the most effective ways to improve the quality of healthcare decisions.

Acknowledgments

We want to acknowledge the support of the IPDAS Steering Committee and chapter participants; the many participants in the Delphi process for their time and invaluable contributions; Greg Ogrinc and Tammy Hoffmann for advice on methodology and the development process; and Sarah Ivan for project support.

Footnotes

  • Contributors The writing team was led by KRS and RT and included PA and ASH. All authors were involved in the acquisition, analysis and interpretation of data, and in drafting and critically revising the manuscript. All authors except HV were involved in the study design. Each has provided final approval of the version submitted, and the lead authors KRS and RT are accountable for the accuracy and integrity of the work presented.

  • Funding The in-person workgroup meetings were supported through grants from the UK’s Health Foundation (grant # 7444, RT) and the Agency for Healthcare Research and Quality’s small conference grant (1R13HS024250-01, KRS). ASH is funded by the shared decision-making collaborative of the Duncan Family Institute for Cancer Prevention and Risk Assessment at the University of Texas MD Anderson Cancer Center.

  • Competing interests KRS received salary support as a scientific advisory board member for the Informed Medical Decisions Foundation, part of Healthwise, a not-for-profit organisation that develops patient decision aids, from April 2014 to April 2017. CAL was employed by Healthwise from April 2014 to November 2016. VAS received personal fees from Merck Pharmaceuticals. During the last 36 months, SLS received funding from the Agency for Healthcare Research and Quality for a scoping review to identify a research agenda on shared decision making and high value care. During this time, she also completed unfunded research or papers on patient decision aid evaluations and developed the Reaching for High Value Care toolkit, a set of evidence briefs and resources on patient-centred high value care for all levels of system leaders. As part of those efforts and her work on the current manuscripts, SLS has developed a series of research resources on reporting research. She is considering the potential benefits and harms of pursuing intellectual property protection for some of these efforts, but has not initiated this to date.

  • Provenance and peer review Not commissioned; externally peer reviewed.