Abstract
This Explanation and Elaboration (E&E) article expands on the 26 items in the Standards for UNiversal reporting of Decision Aid Evaluations (SUNDAE) guidelines. The E&E provides a rationale for each item and includes examples of how each item has been reported in published papers evaluating patient decision aids. The E&E focuses on items key to reporting studies evaluating patient decision aids and is intended to be illustrative rather than restrictive. Authors and reviewers may wish to use the E&E broadly to inform the structuring of patient decision aid evaluation reports, or use it as a reference for details on how to report individual checklist items.
Keywords: shared decision making, patient-centred care, checklists, patient education
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Background
This Explanation and Elaboration (E&E) document provides authors with additional guidance and examples of how to report the 26 items included in the Standards for UNiversal reporting of Decision Aid Evaluations (SUNDAE) reporting guideline (see online appendix A).1 For each item, the E&E provides a brief rationale for the importance of that item, cross-referencing to other items as appropriate and including evidence for inclusion where available. It also provides selected examples, explains how those examples illustrate good reporting and notes any additional content that might further improve the quality of reporting.
Development of the E&E built upon the methods used to develop the SUNDAE checklist.1 Evidence and definitions were drawn from the literature, including the 2014 update of the International Patient Decision Aid Standards (IPDAS) Collaboration guidelines and the 2014 and 2017 Cochrane reviews of patient decision aids (PDAs).2–4 Online appendix B provides a table summarising the types of rationale and evidence supporting the inclusion of each item in the checklist.
The aim of the SUNDAE E&E is to support authors in demonstrating the rigour of their research through high-quality reporting. Previous reviews of PDA reports revealed notable gaps in reporting that limit the replicability of studies, reviews of the evidence supporting PDAs, identification of appropriate PDAs for clinical use and classification of tools that meet minimum standards for certification.3 5–7 Improved reporting may support systematic reviews and, in turn, inform best practices and policies regarding certification and implementation of PDAs.
Using the SUNDAE explanation and elaboration document
Some readers may wish to read the whole document, but others may find it more useful as a reference for individual items. Although the checklist items are organised by standard manuscript sections (eg, Introduction, Methods), we recognise that for some items there may be flexibility as to where in the paper they are included. For example, the requirement to list the options included in the PDA may be fully met in the Introduction or in the Methods section.
We recognise that authors may not be able to include all items in one evaluation paper, particularly given editorial restrictions. Some items may be addressed in an appendix or other supplemental material, which has become increasingly available with online publication. Items related to development of the PDA may already be available in published papers or other reports of the development of the PDA (eg, needs assessments, usability/acceptability testing, pilot studies); in such cases, a reference to the development paper will meet reporting requirements. However, if space allows, we encourage brief reporting in the evaluation report of items published elsewhere, as evaluation reports are often the only reports included in systematic reviews. Similarly, fidelity and process evaluation items may be included in the evaluation paper or published separately. Some items may be reported together within an individual article. For example, fidelity assessment may be seen as part of process evaluation (items 14 and 15) and reported together.
The examples were selected from published PDA evaluation reports to represent a variety of study designs, clinical/public health contexts and international writing styles. In some cases, multiple examples are included as individual examples may not be comprehensive. Longer examples were edited for length to exclude extraneous material, but excerpts were not edited to ensure consistent use of terminology or to improve the quality of reporting. We did not impose any other specific criteria in selecting examples for inclusion. These reporting guidelines are not meant to inappropriately constrain authors. There is a risk that strict adherence to guidelines of any sort may be counterproductive.8 While we recommend that authors pay careful attention to the guidelines, we also encourage authors to prepare their manuscripts as clearly and concisely as possible.
Explanation and elaboration of SUNDAE guidelines items
Title/Abstract
As part of a standard title and abstract:
Item 1
Use the term patient decision aid in the abstract to identify the intervention evaluated and, if possible, in the title.
A wide range of terms has been used in the literature on PDAs, often interchangeably. There is currently no distinct MeSH heading for literature searching; however, the most widely used term is patient decision aid, as incorporated within the Cochrane review and the title of IPDAS.2 3 Other terms used include decision aid, patients' decision aid and decision support intervention/technology. The term patient decision aid refers to evidence-based tools designed to help patients participate in making specific and deliberated choices among healthcare options.4 The Delphi process strongly supported using the term patient decision aid for these reporting guidelines, both for consistency and to ease identification of relevant studies when searching the literature.
Example 1
Evaluation of the effect of a patient decision aid about vasectomy on the decision-making process: a randomized trial.9
Example 2
Randomised controlled trial of a patient decision aid for colorectal cancer screening.10
Explanation
The chosen examples state in the title that a PDA has been evaluated.
Item 2
In the abstract, identify the main outcomes used to evaluate the patient decision aid.
Identifying the main outcomes can be a challenge in reporting PDA evaluations. Including the main outcomes in the abstract immediately helps the reader identify the key measures of impact. It also indicates the focus of the study (eg, on decision-making process, decision quality or some other measure, such as clinical outcomes or resource use). Further detail of the measures and instruments used should be included in the Methods section (see items 17 and 18). Including standard, descriptive terms for key outcomes in the abstract (eg, Decisional Conflict Scale) will greatly assist with searching and indexing.11 12
Example 1
The primary outcome was informed choice (defined as adequate knowledge and consistency between attitudes and screening intentions)….13
Example 2
The primary outcome was patients' intention to undergo screening for prostate cancer, assessed immediately after reading the decision aid. In addition to giving their answer, patients were systematically asked to cite the reasons for those answers by responding to open-ended questions.14
Example 3
The aim of this study was to evaluate, in a factorial randomized controlled trial, whether simple (information video/leaflet) and complex (decision analysis) decision aids for treatment of hypertension were associated with changes in decisional conflict, anxiety, treatment intentions, and actual treatment choice in a sample of newly diagnosed hypertensive patients.15
Explanation
These examples clearly state the main outcomes. The first example incorporates an outcome measure of decision quality, as recommended by the IPDAS Collaboration.11 The second includes a measure of the patient’s intention, as well as seeking to capture reasons for the stated intention. The third example lists a range of measures.
Introduction
As part of a standard introduction:
Item 3
Describe the decision that is the focus of the patient decision aid.
A critical component of a paper reporting an evaluation of a PDA is a description of the decision or decisions being supported by the PDA, to enable the reader to understand the clinical context and the intended audience for the intervention (see item 4). This was strongly confirmed in the Delphi process. The decision should be mentioned briefly in the Introduction and/or Title, and expanded on in the Methods section, where the PDA is described in more detail (see items 10, 11 and 12).
Example 1
This paper describes … the Yorkshire Dialysis Decision Aid (YoDDA) booklet,16 and investigates (a) its acceptability to people making dialysis decisions, and (b) the feasibility of evaluating its effectiveness within usual care.17 [Introduction]
The YoDDA booklet is designed for people with worsening kidney disease, and their family members, to make informed decisions between 2 dialysis options delivered in 2 ways, in the context of their lifestyle: hemodialysis, in a medical centre or at home; peritoneal dialysis, at home in an automated or continuous ambulatory form.17 [Methods]
Example 2
The purpose of the present study was to determine whether the addition of the Decision Board to the medical consultation improved patient knowledge and satisfaction with decision making compared with the medical consultation alone for women with lymph node-negative breast cancer considering adjuvant chemotherapy.18 [Introduction]
[The Decision Board] contains detailed information tailored to the individual on a patient’s treatment choices (chemotherapy or no chemotherapy).18 [Methods]
Explanation
The examples demonstrate reporting of the decision being supported. Example 1 refers briefly in the Introduction to the decision supported by the YoDDA booklet; the reader can see this is a PDA about dialysis, but specification of the treatment options occurs in the Methods section (see item 10). Example 2 describes the decision as being about adjuvant chemotherapy in lymph node-negative breast cancer and clarifies the options (chemotherapy/no chemotherapy) in the Methods section.
Item 4
Describe the intended user(s) of the patient decision aid.
A description of the intended user(s) of the PDA helps readers assess the generalisability of findings to practice, and understand who is expected to appropriately use the decision aid. This description should be available in the Abstract/Introduction, potentially with further detail in the Methods section (see items 13 and 19).
Example 1
For this study, we selected patients to be eligible for prostatectomy as well as radiotherapy. In most previous studies comparing prostate cancer treatments, patients’ characteristics differed. For example, surgery patients were often younger and had less advanced tumours then irradiated patients. By selecting, this study aimed to involve a more homogeneous population that actually had a choice. The aim of this study was to examine the effect of a decision aid on the treatment choice for localized prostate cancer in men who really have a choice.19
Example 2
Our objective was to evaluate the impact of a PDA on decisional conflict of middle-aged women who were considering NHPs [natural health products] for menopausal symptoms… Inclusion criteria were: (1) women aged from 45 to 64 years; (2) suffering from symptoms of menopause; (3) considering NHPs for their menopausal symptoms…20
Explanation
These examples describe the rationale for selecting participants for the evaluation study, linking it explicitly to the intended users of the PDA. Further details are given of the patients studied (especially in example 2). The study participants are patients at the same point in their care pathway as the intended patient users in clinical practice.
Item 5
Summarise the need for the patient decision aid under evaluation.
It is important that PDA evaluation reports explain the need for the specific PDA so that readers and reviewers may assess the appropriateness and potential value of the intervention. Justification might include, for example, evidence that patients do not know or understand the options available or are making poor quality decisions, geographical variation in uptake of options suggesting underuse or overuse, mismatch between patient values and the options chosen or lack of support to make and implement decisions.
Example 1
Current guidelines no longer indicate a single treatment as the optimal treatment of localized prostate carcinoma. Therefore, patients should be involved in the treatment decision, which calls for the use of decision aids.19
Example 2
Benign prostatic hyperplasia (BPH) is a common condition affecting roughly 25% of older men. These men face a choice of "watchful waiting" or active treatment, either medical or surgical. Although prostatectomy rates have declined recently, this procedure remains the second most common major operation among Medicare-age men, with 311,000 performed in the United States in 1991. Moreover, considerable geographical variation has been reported for prostatectomy. A recent BPH practice guideline, noting these variations, has recommended a shared decision-making approach to treatment.21
Explanation
The above examples clearly state the need for each specific PDA. Both examples make it explicit that an intervention is needed to support decision making given that there is more than one reasonable option for treatment. Example 2 gives a broader rationale including the choice of options, evidence of geographical variation and recommendation from a published clinical guideline.
Item 6
Describe the purpose of the evaluation study with respect to the patient decision aid.
Consistent with other reporting guidelines, a clear statement of the purpose (aims/objectives) of the evaluation study helps readers and reviewers judge the appropriateness of the study design, outcomes used and data analysis, as well as critically appraise the findings of the study. This item is particularly important for PDAs because the evaluation needs to be linked to the explicit and intended purpose of the PDA. For many evaluation reports, the aim of the study will be consistent with the purpose of the PDA; however, some reports may include the evaluation within a broader study (eg, of multicomponent interventions). This item should be found in the Introduction, but may well be expanded in the Methods section, particularly if the evaluation of the PDA is a subaim of the overall study (see items 13 and 15).
Example 1
The objective of this study was to estimate the effect of the Depression Choice decision aid on the quality of the decision-making process and depression outcomes … We hypothesized that its use during the clinical encounter would improve patient engagement, the quality of decision making as perceived by patients and clinicians, and depression outcomes.22
Example 2
This paper describes the development and field-testing process used to create the virtual decision lab, which had three primary objectives … The second objective was to test the [Options for Managing Your Knee Osteoarthritis Pain] web-based decision aid in terms of its performance compared with the video-booklet decision aid used in clinical practice.23
Explanation
The above examples clearly state the purpose of the evaluation of the PDA. Example 1 also highlights several helpful details about the evaluation: assessed with both decision process and outcome measures, as used during the clinical encounter, and by both patients and clinicians. These items would then be explained further in the Methods section (see items 13 and 15). Example 2 illustrates how evaluation of a specific PDA may be nested within a larger study (eg, multicomponent evaluation, dissemination and implementation study).
Methods
Studies with a comparator should also address items 7–13 for the comparator, if possible.
Item 7
Briefly describe the development process for the patient decision aid (and any comparator), or cite other documents that describe the development process. At a minimum, include the following:
Participation of stakeholders in its development
The process for gathering, selecting and appraising evidence to inform its content
Any testing that was done
The importance of systematic, rigorous and replicable development of PDAs has been summarised within the IPDAS programme.24 Furthermore, in the most recent Cochrane review of PDAs, only about half of the PDAs were reported to have involved patients in their development in some way.4 Expert consensus suggests that key features of the development process include participation of stakeholders, a high-quality process for gathering, selecting and appraising evidence to inform the PDA's content, and pilot testing of the intervention.25 A full description of the development process may be published separately (eg, in a protocol or development paper), but all evaluation reports should include a brief statement of and/or reference to the development methods. This may include noting the theoretical framework, the process for gathering, selecting and appraising evidence, the inclusion of all stakeholders in development and reference to any formative studies (eg, pilot studies, acceptability studies). Development methods for comparators should also be described or referenced where possible; if 'usual care' is the comparator, it should be described.
Example 1
Details on the design, development and preliminary evaluation of the decision board for the surgical treatment of breast cancer are described elsewhere.26 The decision board was based on a systematic review of randomized trials comparing mastectomy to breast conserving therapy and qualitative interviews and focus groups with women with breast cancer and their surgeons regarding informational needs for decision making.27
Example 2
To ensure that our design process addressed multiple users’ needs, we formed a stakeholder advisory panel consisting of four patients, two clinicians, two decision scientists, two decision counselors, and two health informaticians. The advisory panel selected three publications to guide development. … Six iterative cycles of review and revision refined the paper and online prototypes. Groups of five patients walked through paper drafts of each component and were asked to comment on the wording, format, and visual layout. The drafts were revised in accordance with their comments, and iteratively presented to a new set of five patients, then revised again. Once feedback reached saturation, the advisory panel rereviewed the optimized paper drafts and approved them for programming … Four focus groups of patients (n = 4 each) iteratively reviewed the prototypes online. Finally, the advisory panel re-appraised the patient decision aid using the IPDAS Collaboration’s criteria, and approved the research platform for initial field-testing in the clinic.23
Explanation
Example 1 illustrates how a reference to a published paper on development, plus a brief summary statement, can provide enough information for readers to identify that a structured PDA development has taken place and to access the detail from the original paper, if needed. Example 2 provides more detail about the role of stakeholders, use of a framework and preliminary testing studies in preparing the PDA for the evaluation study.
Item 8
Identify the patient decision aid evaluated in the study (and any comparator) by including:
Name or information that enables it to be identified
Date and/or version number
How it can be accessed, if available.
Readers of reports of PDA evaluation studies should be able to uniquely identify and access the PDA, any accompanying interventions and/or comparators for several reasons. They may be interested in viewing them as part of interpreting the specific evaluation study; they may wish to implement them in practice; they may be involved in data extraction for systematic review or meta-analysis; and/or they may wish to confirm that they meet the minimum characteristics of a PDA.28–30
Of 17 RCTs in a recent review of the quality of PDA reporting, only 2 included complete PDAs within the article, 2 referenced URLs providing complete PDAs and 3 referenced URLs where part of the PDA was provided.5 For seven articles, the PDA itself had to be obtained from the authors to confirm the characteristics of the intervention.5 Being able to access and view PDAs can result in better assessments of study quality and provide more complete data for future meta-analyses.
This item could be completed in a number of ways, ideally by including the name and version number in the article and stating how the PDA can be accessed. Access might be provided by including the PDA as an appendix or online resource, by referencing another published paper that includes the PDA or by referencing another source (eg, a website or database of PDAs). The referenced PDA should be the version that was evaluated within the published study; although reference to an updated version may also be appropriate, it should be clear which version was evaluated. Such information may be found in different sections of the paper, primarily in the Introduction or Methods (see also items 12 and 13).
Example 1
The [colorectal cancer] screening decision aid, called CHOICE [Communicating Health Options through Interactive Computer Education, version 6.0W], was based on a previously validated videotape decision aid.17-18 The program is designed to be accessible to low-literacy patients by using easy-to-understand audio segments, video clips, graphics, and animations.31
Example 2
In preparation for our trial, we developed a decision aid [informed choice about breast cancer screening] (the intervention; appendix pp 3–14), then produced a control version for comparison (appendix pp 15–18).13
Example 3
Using the FRAX calculator,7–8 … our group developed an encounter decision aid in 2008, the Osteoporosis Choice decision aid, to facilitate shared decision making during the clinical encounter9 … we sought to determine the effect of the Osteoporosis Choice decision aid compared with usual care with and without the FRAX fracture risk calculator.
To facilitate the exploration of this tool in practice, our group has made freely available an electronic version of the tool for stand-alone use or integrated into the electronic medical record. The tool can be found here: http://osteoporosisdecisionaid.mayoclinic.org.32
Explanation
The above examples demonstrate three approaches to reporting this item. Example 1 includes in-text mention of the PDA name, version number and references to earlier versions, and provides a screenshot of the PDA in the article. Example 2 is one in which the PDA and control are published as an appendix with the paper. Example 3 names the PDA and comparator, provides references for both and provides a URL for direct access to the PDA.
Item 9
Describe the format(s) of the patient decision aid (and any comparator) (e.g., paper, online, video).
PDAs come in a range of formats and media (eg, print, audio, video, digital), with various accompanying channels or modes of delivery (eg, in consultation, in the community, on the web) that may affect the reach, accessibility and interactivity of the PDA, as well as its usability, implementation and sustained use. See also item 13 on mode of delivery; we note that authors sometimes confuse format and mode of delivery. A few randomised studies suggest that the format of PDAs and accompanying interventions can affect decision-making outcomes.33 34 There also appears to be consensus that several factors intrinsic to format affect reach, accessibility, interactivity, tailoring of information and outcomes.12 35–40
Example 1
The study used a 2×2 factorial comparison of discussion and video formats for presenting men information about PSA testing…
Usual care (n=43): …
Discussion (n=45): Participants listened to a lecture which closely followed the content of the videotape The PSA Decision: What You Need to Know (PSA video), developed by the Foundation for Informed Medical Decision Making. The lecture took between 25 and 30 min. Following the lecture, participants were invited to ask questions and discuss the lecture content.
Video (n=46): Participants viewed the 25 min PSA video. The videotape was previously evaluated and described by Flood et al.9
Video and discussion (n=42): Participants viewed the 25 min PSA video. Following the videotape, participants were given an opportunity to ask questions and discuss the content of the videotape with a moderator. Group discussions following the video averaged 7 minutes in length.33
Example 2
Our intervention included a combined lifestyle and medication adherence intervention delivered in two alternate formats: counsellor-delivered or web-based. Participants in both arms received a computerized decision aid and then either 7 sessions of counseling from a counselor or 7 sessions of interactive tailored messaging on the web (up to 5.5 hours of interventional contact; see Fig. 2 [in original paper]). In designing the intervention, our goal was to deliver the same content in both formats. Thus, we designed the scripted counseling and written materials in the counselor arm to match the text of the web-based intervention and used the same sequencing of materials for both interventions.41
Explanation
The above examples state the alternative formats of delivery. Example 2 uses the same PDA format in each arm, but alternative formats (eg, print and web) and modes of delivery (predominantly in person in the clinic, or at home on the web) for the accompanying counselling intervention that supports decision follow-through. These examples also directly address interactivity and tailoring, as well as other factors that might moderate the PDA effect (eg, content, time spent).
Item 10
List the options presented in the patient decision aid (and any comparator).
The 2013 IPDAS Collaboration guidelines update stated that, 'it is important that PDAs present all the relevant options and the information about those options in a complete, unbiased and neutral manner that is sustained throughout the PDA's content and format'.42 Explicitly listing the options enables readers to assess whether the intervention meets one of the qualifying criteria of a PDA (ie, presenting a decision between two or more medically relevant options) and whether all medically relevant and patient-relevant options are included (such as starting/changing/stopping active therapies, no treatment, 'watchful waiting' or active surveillance).29 Additional detail (see items 11 and 12) may be provided in a figure (eg, a screenshot of a web-based tool), an appendix, a reference to a development paper and/or a URL. Authors should consider reporting the rationale for excluding any potentially relevant options (including 'no treatment') from the PDA. For systematic review and meta-analysis, listing the options presented in each PDA also allows reviewers to assess the appropriateness of cross-comparisons of PDAs, studies and papers. It also allows a potential implementer to determine the applicability to their patient group or health system provision.
Example 1
The web-based decision aid … educated patients about CHD [coronary heart disease], their predicted global CHD risk, their risk factors, and the benefits and harms of the most effective risk reducing strategies (aspirin, cholesterol medication, hypertension medication, and smoking cessation)…43
Example 2
The decision aid was a decision support booklet structured in two parts. The first part provided information about the use of the decision aid, what the prostate is, what prostate cancer is, the stages and grades of cancer, treatment options (surgery, radiation therapy, and watchful waiting), …44
Explanation
Both examples explicitly list the treatment or risk reduction options included in the PDA.
Item 11
Indicate the components in the patient decision aid (and any comparator) including:
Explicit description of the decision*
Description of health problem*
Information on options and their benefits, harms and consequences*
Values clarification (implicit or explicit)*
Numerical probabilities
Tailoring of information or probabilities
Guidance in deliberation
Guidance in communication
Personal stories
Reading level or other strategies to help understanding
Other components.
*These components are needed to meet the definition of a patient decision aid.
Items 11 and 12 focus on the components of the PDA: item 11 indicates the need to list the components, and item 12 the need to describe them. Listing the components enables readers to assess whether the intervention meets the criteria for categorisation as a PDA (starred components) and to see readily which components are included. A recent analysis of RCTs of PDAs showed that the majority meet the criteria for qualifying as a PDA.5 29 Explicit listing of the components also supports systematic reviews and meta-analyses, for example, in exploring the contributions of different component parts of PDAs to effectiveness.
Example 1
The decision aid comprises an interactive computer program provided on a CD-ROM. It presents up-to-date, evidence-based information about abdominal aortic aneurysms and their treatment options, elective aneurysm surgery and watchful waiting, and the pros and cons of those treatment options, as is required by European law … For patients with aneurysms of at least 5.5 cm, the decision aid provided a comprehensive insight into the balance of benefit and harm of a surgical (open and endovascular) and a conservative approach, taking age, comorbidity, and size of the aneurysm into account. The program also includes a number of questions that invite the patient to clarify his or her preferences. For example, “To what extent would you be anxious or worried about rupture if you do not get surgical treatment?”45
Example 2
The decision aid [for patients with recently diagnosed prostate cancer] was a decision support booklet structured in two parts. The first part provided information about use of the decision aid, what the prostate is, what prostate cancer is, stages and grades of cancer, treatment options (surgery, radiation therapy, and watchful waiting); the chance of intermediate outcomes (benefits and risks of each treatment option); the chance of long-term outcomes for the three different options and potential adverse effects (eg, urinary incontinence, erectile dysfunction, and the specific effects of radiotherapy) associated with treatment choice. All of this information was summarized into tables, with the different risks and benefits of each treatment clearly outlined to ensure that patients could visually compare the differences.
In the second part, the decision aid included a section with examples of questions to ask health professionals, three short descriptions of the experiences of three patients who had chosen different treatments, clarification of the patients’ own values for each benefit and risk, and assistance in the final decision-making process. This last personal section included four steps to assist the patient in the decision-making process: (1) clarification of ideas, (2) identification of needs to make the decision, (3) exploration of needs, and (4) approach to steps to be taken.44
Explanation
Both examples succinctly state which components are included in the PDA and it can be seen that they meet the core criteria for a PDA. Additional detail (see item 12) may be provided in text, supplementary appendices or by web link to the PDA.
Item 12
Briefly describe the components from item 11 that are included in the patient decision aid (and any comparator) or cite other documents that describe the components.
Expanding on item 11, descriptions of the components of the PDA (and comparator, where relevant) allow for more in-depth consideration of the completeness and quality of the PDA and its component parts. Ideally, the description should be in sufficient detail that readers of the report know exactly what was included, and systematic reviews comparing studies have adequate information to compare components and features. If sufficient detail cannot be provided within the evaluation report, additional information should be provided by referencing development paper(s) or providing access to the PDA (eg, figures, supplementary appendix, URL; see item 8). Items 11 and 12, which list and describe the interventions, are usually reported in the Methods section.
Example 1
Based on the Ottawa Decision Support Framework, the advisory panel structured the patient decision aid in four deliberative steps.46
Step 1: Information comprehension … The decision aid presented up-to-date clinical information about the natural history of knee osteoarthritis, non-surgical options, surgical options and potential risks/benefits … The decision aid presented the clinical information at an overview level in plain language, with available audio voiceover. Patients who desired additional detail could choose interactive ‘More Information’ links. It then provided a side-by-side summary of the treatment options and attributes. Step 1 ended with two optional Personal Decision Activities, where patients could: (a) self-quiz their knowledge of the key facts and (b) document questions for their doctor.
Step 2: Values clarification … The narrator discussed the importance of considering whether some attributes of particular procedures are more important than others. Narrative examples illustrated this … Finally, step 2 presented two interactive Personal Decision Activities in which the patient could: (a) rate the importance of each option’s attributes on a 0-star to 5-star scale and (b) indicate an initially-favored option that best matched the attributes they valued most.
Step 3: Considering social resources … The narrator described strategies for managing positive and negative pressures to choose a particular option, and for communicating one’s preferences with others … Step 3 presented two interactive Personal Decision Activities in which the patient could: (a) list who else might be involved in the decision process and identify what the patient would like their role to be and (b) document specific questions they had for these individuals.
Step 4: Forming an action plan … The narrator discussed strategies for creating (a) short-term action items to address any gaps in information, clarity or personal support and (b) a long-term plan. Step 4 ended with an optional Personal Decision Activity, where patients could interactively create their personal short-and/or long-term action plans.
In closing, the website summarized participants’ responses into their printable Personal Decision Summary and provided links to references and related resources.23
Example 2
The therapeutic options presented on the DB [decision board for invasive treatment of primary or secondary carious lesions in pre-molars and molars] are no therapy, gold cast, amalgam, ceramic, simplified composite (bulk-filled QuiXfil; Dentsply, Konstanz, Germany) in combination with a self-adhesive bonding (XenoV; Dentsply) and composite restoration with incremental filling technique (Ceram●X mono; Dentsply) in combination with an etch-and-rinse adhesive (Optibond Fl; Kerr).
The factors shown on the DB are ‘survival rate’, ‘treatment time’, ‘costs’/‘self-payment’ and ‘characteristics’. The described criteria, except for time and cost, were based on reviews about survival rates(12,15) and comparison of material properties.(16) The ‘characteristics’ are substance loss, side effects and abrasion/mastication comfort.
The criterion ‘survival rate’ was presented in natural frequencies with positive and negative notation. According to the literature, this form of presentation is the most non-judgemental and comprehensible one from the patients’ point of view.(17) The treatment costs were calculated according to the national guidelines for medical fees for the statutory system and private health insurance funds (BEMA and GOZ).47
Explanation
Example 1 describes several components in detail including information on options and their attributes, guidance on communication (eg, support to list specific questions), values clarification exercises and action planning support. Example 2 includes evidence-based information on the options, on their risks and benefits, including numerical probabilities based on natural frequencies, and on the costs incurred.
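To make the natural-frequency format in example 2 concrete, consider a hypothetical illustration (the figures below are invented and are not drawn from the cited study). A survival probability can be expressed in natural frequencies, with both positive and negative notation, by scaling it to a reference population:

\[
p = 0.93,\; N = 100 \;\Rightarrow\; pN = 93 \text{ of } 100 \text{ restorations survive (positive notation)}, \qquad (1-p)N = 7 \text{ of } 100 \text{ fail (negative notation)}.
\]

Reporting which notation(s) a PDA used, and the size of the reference population, lets readers judge how probabilities were framed for patients.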
Item 13
Describe the delivery of the patient decision aid (and any comparator) including details such as:
How it was delivered (eg, by whom and/or by what method)
To whom it was delivered
Where it was used
When it was used in the pathway of care
Any training to support delivery
Setting characteristics and system factors influencing its delivery.
PDAs are complex interventions, and their delivery is as important as the components described in items 11 and 12. Several aspects of PDA delivery—for example, where, when, how, to whom and by whom it was delivered, and the characteristics of the setting in which it was delivered—can influence whether and how it is used by intended users, which, in turn, may influence its efficacy and replicability. Details on all aspects of PDA delivery are required to enable other researchers to replicate or build on research findings and interpret whether the PDA was delivered and used as intended (see items 14 and 21), explain outcomes of the evaluation and assess generalisability to other health-service contexts. In addition, these details are required to enable decisions about policy, health service management, commissioning, design and cost.
Limited research has compared the effect of different delivery approaches on the use and efficacy of PDAs. An RCT by Jones et al (2009) found improvements in knowledge, and a trend towards better acceptability and less decisional conflict, when the PDA was delivered by clinicians during the visit compared with delivery by clinician-researchers before the visit.48 Frosch et al's trial comparing delivery of a PDA through the internet or video found that participants receiving the PDA through video were more likely to review the materials than those receiving it through the internet.49 See also item 9; format and mode of delivery are interlinked, but authors sometimes confuse them or use the terms interchangeably.
Example 1
Between November 2006 and June 2007, the decision aid [prostate cancer screening] was delivered in intervention sites through tablet computers made available in common gathering areas (e.g., break rooms, cafeterias). We elected to make computers available in public spaces, with the assumption that visibility would generate interest and, thus, promote participation. Each location afforded sufficient privacy so that DA users could sit individually and view the computer screen without their responses being seen by others. Headphones were provided. The DA was designed to be independently administered even for those with minimal or no computer skills. A health educator was available to provide assistance with computers if needed, although no individual required assistance, other than initial start-up of the program. We used multiple strategies to publicize and promote the intervention, including posters placed in high visibility areas, distribution of fliers, announcements made at regularly scheduled meetings and provision of small incentives (e.g., key ring flashlights). Computers were made available in work sites at prespecified days, based on agreements between management and study staff. The computers were available during the day, generally in 6-hour periods, based on managements' request. Each site had at least three computers available on site for a minimum of 15 days over the 3-month intervention period (roughly once per week). Men were allowed release-time from work to use the DA. Information was saved at each time of use; men could either complete the DA session at one time or return at multiple time points to complete it (mean time spent, 28 min). At the conclusion of the session, men were provided with a printed tailored report summarizing their estimated risk for [cancer of the prostate], assessment of pros/cons, decisional status and pages visited during DA use. This report was designed to facilitate communication about screening with primary care providers.50
Example 2
The intervention was a multifaceted program based on shared decision-making concepts. The program included physician training, a decision board for use during the consultation that was handed out to the patients after the medical encounter, and printed patient information that combined evidence-based knowledge about depression care with specific encouragement for patients to be active in the decision-making process. Physicians in the intervention group completed modules on guideline-concordant depression care. The modules also included content on enhancing skills for involving patients in the decision-making process during the medical encounter.
The theoretical framework for the shared decision-making portion of the modules was based on the work of Towle and Godolphin32 and Elwyn and colleagues.33–35 Specific aspects of the modules included specialized lectures with accompanying questions and discussion rounds, facilitation practice, role-playing and video exemplars of high-quality shared decision making. Standardized case vignettes and case studies from the general practice were used. The training took place within a 6-month time period, which included five scheduled training program events, each including four discrete modules. Attendance was consistently high: 17 physicians (85%) attended the first event, 15 (75%) the following two events, 16 (80%) attended the fourth event and 19 (95%) attended the last event. Eleven physicians (55%) attended all five events and nine (45%) at least three training sessions. Additional details about the conceptual basis of the training program, the program events, specific modules and evaluation of the training program is published elsewhere.36 All intervention physicians were given decision aids and patient information leaflets for dissemination to the patients.
The decision aid was used during the decision-making consultation. It contained details about the symptoms of the disease to certify the diagnoses, information about the treatment options, their pros and cons and a support for the patients’ value clarification. The patient information leaflet was based on the Clinical Practice Guideline on Depression in Primary Care of the Agency for Healthcare and Policy Research [www.ahrq.gov] and contained information about the diagnosis and therapy of the disease, addresses health beliefs, coping strategies, involvement of relatives and presents tips to foster the involvement of patients in the treatment decision making, e.g., patients’ preparation for the medical encounter.51
Example 3
Participants were mailed the relevant booklet [PDA for colorectal cancer screening] for their age and gender and a questionnaire which they were asked to complete and return. A faecal occult blood test kit was not included with the package, but information was provided about how to obtain one.52
Explanation
All three examples describe the delivery of the PDA by giving details on the delivery aspects applicable in their specific context. All give details on how the PDA was delivered: in example 1, through readily accessible tablet computers to be used independently; in example 2, through a decision board used during the consultation; in example 3, by post to the participants. Example 1 also describes 'to whom' the decision aid was delivered: to male staff.
They clearly describe 'where' the PDA was intended to be used: in example 1, on computers within the workplace; in example 2, in the clinic; and in example 3, implicitly in the home setting. They also describe when the PDA was used in the clinical pathway of care: in examples 1 and 3, the screening PDA is delivered to individuals who have not yet entered a care pathway, as they are considering whether or not to have screening. In example 2, the extract gives little detail about the patients' place in the pathway, but it can be established from information elsewhere in the paper that the PDA was to be used with patients newly diagnosed with depressive disorders, during the consultation in which a decision about treatment was to be made.
The examples describe whether or not health professionals/researchers were involved in the delivery and, where they were, describe any training they received to support delivery. In example 1, health professionals/researchers were not directly involved in delivery: the PDA was designed to be used independently, with assistance with the computers available from a health educator. Example 2 is very clear about the physicians involved in delivering the PDA and describes extensively the training delivered to them before they used the PDA. In example 3, the patient is expected to review the PDA at home without explicit health professional or researcher input.
Finally, they describe aspects of the setting or system factors that may influence PDA delivery beyond the immediate clinical or research setting. Example 1 describes in detail various system and setting factors that characterised the context of PDA delivery and that may, in turn, influence PDA uptake and use. For instance, the authors describe using a range of strategies to promote the PDA and make it widely and easily accessible (eg, incentives, posters, fliers, announcements), a high level of buy-in from management in the form of release time for employees, and the availability of quiet space and computers. In example 2, it is apparent that the training component is a significant element in supporting the delivery of the PDA.
Item 14
Describe any methods used to assess the degree to which the patient decision aid was delivered and used as intended (also known as fidelity).
Fidelity is a key methodological requirement of any intervention study: it shows whether or not the intervention was delivered and used as planned and in the same way for all participants (sometimes called delivery and implementation fidelity). Implementation fidelity is a component of process evaluation and helps the reader assess why the intervention works or does not work (see item 15). Reporting on this item might include a description of the methods used to determine whether or not the PDA was viewed/read/used as planned and, in some situations, the length of exposure to the PDA (eg, the number of minutes of video watched, the length of time spent using the PDA or which components of an online PDA were accessed and for how long). Reporting how fidelity was assessed may enhance understanding of factors influencing the success or failure of the PDA (see items 15, 21 and 24). This item may be reported together with item 15.
Example 1
The website monitored whether participants reviewed the assigned intervention [prostate cancer screening PDA] before their appointments. Men who had not clicked on the assigned link within a week before their appointment received an email reminding them to review the intervention… . Rates of review of the educational materials in the four groups were also compared by logistic regression.53
Example 2
We also assessed, by reviewing the video-recorded encounters, the fidelity with which the decision aid was delivered and used as intended during these encounters using the osteoporosis fidelity checklist.54 This scale comprises 10 items (present/absent scale), and results are presented as the percentage of items present.32
Example 3
In addition to a condition-specific educational pamphlet [prostate cancer PDA], participants received a maximum of two tailored telephone education calls within 1 month … by trained graduate-level health educators. Treatment fidelity checks were conducted on 44% of calls. Trained raters listened to audiotaped calls and checked whether key points were covered and the interventionist spoke at an appropriate pace, addressed questions and probed appropriately.55
Explanation
Example 1 describes limited monitoring of whether patients viewed the PDA, without detailing the viewing of component parts, but allowing analysis to reveal differential access across comparison groups to support interpretation of results (see item 21). It also incorporates a mechanism (email reminders) to increase use. Example 2 describes use of a fidelity checklist applied to video recordings of the consultation to assess fidelity of use of an in-consultation PDA, as well as testing for contamination into the control arm by capturing clinician behaviour. Example 3 describes the method for assuring fidelity of the tailored education telephone calls delivered alongside the pamphlet PDA.
Item 15
Describe any methods used to understand how and why the patient decision aid works (also known as process evaluation) or cite other documents that describe the methods.
Process evaluation is increasingly recognised and recommended as a key component of evaluations of complex interventions, whose impact may be highly dependent on the context within which they are delivered. It has been defined as 'a study which aims to understand the functioning of an intervention, by examining implementation, mechanisms of impact, and contextual factors. Process evaluation is complementary to, but not a substitute for, high quality outcomes evaluation'.56 It is an assessment undertaken to understand how and why a PDA works or does not work in a specific study, and links back to fidelity in item 14. Process evaluations explore implementation issues and contextual factors within the trial. They help to distinguish between ineffective interventions (failure of intervention) and badly delivered interventions (implementation failure). They can illuminate the reasons behind effectiveness or ineffectiveness, and thus potentially contribute to understanding the active ingredients of an intervention and the way an intervention is actually delivered in practice. They may also help describe barriers and facilitators to implementation that may be of value to those who wish to implement the intervention in a different context. In some circumstances, they may allow for adaptation of the trial at an early stage to maximise the efficiency or quality of the evaluation.57 Process evaluations may be published within the report of the evaluation or sometimes as a separate report.
Example 1
The process evaluation of this study consisted of:
Open interviews with a sample of 15 patients who did and did not receive the allocated intervention … . A verbatim transcript was created for each interview. Coding and analysis was performed with the ATLAS.ti software package.
Researcher observation of clinicians discussing implementation of the intervention during clinical meetings, which were recorded in a notebook by a research assistant. Themes of interest were identified by the research team and further discussed with the clinical teams when necessary.
A questionnaire-based survey among clinicians consisting of three parts: (1) investigating their attitude towards shared decision making and the use of a web-based decision aid … ; (2) examining potential hampering factors for shared decision making … ; and (3) exploring to what extent clinicians considered patients to be capable and interested in shared decision making …
This process evaluation provided data to shed light on how well the intervention was implemented, to what extent the trial outcomes were related to the quality of the implementation and the setting in which it was implemented and what processes might have mediated these relations.58
Example 2
A parallel qualitative study, Thematic Observational Analysis of DARTSII [Decision Analysis in Routine Treatment Study II], was conducted alongside the RCT of the DARTSII decision support tool. Multiple methods were used to understand the interactional processes of the trial consultations and participants' experiences and understandings of the trial and of any advice they were given. The first 30 participants recruited to the RCT were invited to take part in the qualitative study… . With participants' consent, consultations (n=29) across the three arms of the trial were video recorded.
Within 5 days of the consultation, participants (n=30) were interviewed about general issues related to their experience of [atrial fibrillation], their experience of the consultation and their understanding of how, and what, treatment decisions were reached within it. Participants (n=26) were interviewed for a second time 90–100 days after the consultation. This interview elicited participants’ views of the specific consequences they attributed to the consultation, their post hoc evaluation of the decision reached and the extent to which they believed that their expectations had been met.59
Explanation
Example 1 was included within the report of the RCT and describes several complementary methods used. It includes a clear description of the methods applied, and a rationale for the process evaluation (what it was intended to add to the trial). Example 2 was published alongside, and cited within, the report of the linked RCT. Both describe mixed methods approaches with quantitative and qualitative elements (see item 23).
Item 16
Identify theories, models or frameworks used to guide the design of the evaluation and selection of study measures.
To enable researchers, PDA developers and service providers to interpret and build on the findings from PDA evaluation studies, it is important that authors describe the frameworks, models and/or theories used to evaluate their PDA, making explicit the links between the PDA and the methods and measures used in the evaluation. These frameworks, models and/or theories guide the questions asked, the measures used and the interpretation of analysis and discussion.60 Theories enable us to understand, predict and change phenomena or processes by providing a framework within which to develop and test hypotheses and interpret data.61 The rationale for the measures used is key to understanding whether the PDA worked to support people's decision making and engagement with health professionals, and also whether or not using a PDA impacts on healthcare and patient-reported outcomes. The theories used in developing and evaluating PDAs are underused and under-reported.3 6 Without a theoretical framework or rationale to guide the choice of measures, evidence about how and why PDAs work, and in what contexts, will continue to be limited. Such findings are essential to understanding the mechanisms by which PDAs work in practice, and how they can be integrated into usual care.
Example 1
We developed and tested a decision support intervention based on the Ottawa Decision Support Framework, which provides an approach to supporting individuals in making high-quality decisions that are informed and consistent with their values.62 In the context of prostate cancer testing, we would add that a high-quality decision is one that is consistent with men’s preferences.20 The Ottawa framework identifies determinants of suboptimal healthcare decisions that may be modified by decision support interventions, including: problems with perceptions of the decision (eg, inadequate knowledge, unclear values, decisional conflict), perceptions of others (eg, limited knowledge of others’ opinions and practices, inadequate support), and personal and external resources to make the decision (eg, ability to talk with a physician).63 The present intervention addressed these problematic determinants of prostate cancer testing decisions. [Introduction]
[…]
We hypothesised that relative to men randomised to an attention control condition, men randomised to a prostate cancer decision support intervention condition would have: greater gains in knowledge about prostate cancer and prostate cancer testing, lower decision conflict, greater likelihood of talking with their doctor about prostate cancer testing and greater likelihood of acting on their intentions to test. [Introduction]
Knowledge was assessed with a 14-item index … . Decision conflict related to prostate cancer testing was measured using a modified version of the validated Decisional Conflict Scale (DCS).64 … At post-test, they reported whether they had visited their primary care physician and discussed testing since the pre-test interview … . At post-test, men were asked whether they had "decided to get tested in the future for prostate cancer" (no/yes). This measure of testing intention indicates men’s preference for testing or not testing.55 [Methods]
Example 2
According to Fuzzy Trace Theory,65 it is possible that participants made an initial decision at post-test based on their knowledge and attitudes at that time, and then forgot details by follow-up but remembered their general decision. These participants would, therefore, make a decision at follow-up based on their initial (post-test) knowledge and feelings. Three different ‘informed decision’ scores were calculated to account for the various ways participants may have arrived at an informed decision. Post-test knowledge, attitudes and intentions were used for the ‘post-test informed decision’ score. Follow-up knowledge, attitudes and behaviour were used for the ‘follow-up informed decision’ score. Finally, post-test knowledge and follow-up attitudes and behaviour were used for the ‘latent knowledge informed decision’ score.66
Explanation
These examples show that the authors selected measures to evaluate the PDA in the context of the theories or frameworks that guided their PDA development and evaluation. Example 1 describes development based on the Ottawa Decision Support Framework,46 identifies targeted outcomes on that basis and lists outcome measures appropriate to the framework. Example 2 cites Fuzzy Trace Theory65 and explains the timing of measurement of informed decisions using measures of knowledge, attitudes and intentions or behaviour at different time points postintervention. The specific frameworks or theories are named and referenced, so readers can trace the constructs and concepts assessed by each measure back to the conceptual framework.
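For readers who find it helpful to see the logic concretely, the combination of time points described in example 2 can be sketched in code. This is a minimal, hypothetical sketch: the classification rule in informed_decision() is a placeholder assumption, and all variable and field names are ours rather than the cited study’s.

def informed_decision(knowledge, attitude, chose_option):
    # Placeholder rule (assumption): 'informed' if knowledge is adequate and
    # the choice is consistent with the attitude; thresholds are illustrative.
    return knowledge >= 0.5 and (attitude > 0) == chose_option

def informed_decision_scores(p):
    # p: one participant's measurements keyed by time point (assumed structure)
    return {
        # post-test knowledge, attitudes and intentions
        'post_test': informed_decision(p['knowledge_post'], p['attitude_post'], p['intention_post']),
        # follow-up knowledge, attitudes and behaviour
        'follow_up': informed_decision(p['knowledge_fu'], p['attitude_fu'], p['behaviour_fu']),
        # post-test (latent) knowledge with follow-up attitudes and behaviour
        'latent_knowledge': informed_decision(p['knowledge_post'], p['attitude_fu'], p['behaviour_fu']),
    }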
Item 17
For all study measures used to assess the impact of the patient decision aid on patients, health professionals, organisation and health system:
- Identify the measures
- Indicate the timing of administration in relation to exposure to the PDA and healthcare interventions.
Item 17 links to item 18, which covers the name and properties of the specific instruments used. A description of, and rationale for, the study measures enables readers to evaluate the quality of measurement at multiple levels. First, linking the measure to the theory or conceptual framework of the study allows readers to consider the appropriateness of the measure for addressing the aims of the study (eg, process vs outcome; proximal vs distal outcomes) in relation to the theory/framework (see item 16). Describing the administration of study measures allows readers to assess potential threats to the internal validity of the design (eg, whether ‘decision-making satisfaction’ is measured immediately after the PDA or after both the PDA and the physician consultation). This description also allows readers to weigh the scope and potential impact of the results and interpretation for patients, practitioners and/or public health. Such descriptions also facilitate methods reviews that further evaluate the use, effectiveness and opportunities for improvement of the measures.
Example 1
After the office visit, subjects in the intervention groups completed a post-visit assessment, including a rating of the videotape presentation. Telephone follow-up assessments were conducted at 2 weeks after the baseline assessment and intervention. The same knowledge measure was administered at follow-up. … At the 2-week follow-up assessment, subjects in the intervention groups were asked the degree to which their preferences for PSA testing were influenced by the videotape and whether they would recommend it to other patients.67
Example 2
Research assistants … scheduled a study visit 1 hour before their next clinic visit. After viewing their randomly assigned video, participants completed postintervention questionnaires … . Postintervention questionnaires assessed participants’ knowledge, decision-making and screening behaviours. The low-literacy 10-item Decisional Conflict Scale and four subscale (Informed, Value Clarity, Support and Uncertainty) scores were summed (yes=0, unsure=2 and no=4) and scaled to a maximum of 100 points, with lower scores indicating less conflict.22 The 12-item Patient Self-Advocacy Scale was scored (yes=1, unsure=2, no=3), summed and divided by 12 for an average score, with lower scores indicating greater self-advocacy.23 Chart review at three months after the study visit confirmed colorectal cancer screening test orders and completion.68
Example 3
We measured three types of outcomes in this study: effectiveness, acceptability, and cost-effectiveness. Effectiveness outcomes were divided into three categories. Our primary effectiveness outcome was change in 10-year predicted CHD [coronary heart disease] risk in participants without known CVD [cardiovascular disease]. Secondary outcomes were measured in participants both without and with known CVD and included changes in blood pressure, cholesterol, aspirin use, medication adherence, dietary behaviours, and physical activity. Tertiary outcomes were measured in all participants and included weight, body mass index (BMI), general quality of life and outcomes related to possible harms (liver function tests (LFTs) and creatinine (Cr)). All outcomes were measured at both 4 months (primary timeframe) and 12 months in both study arms. The details of measurements are described below … .41
Explanation
Example 1 is clear about the timing of the measures with respect to the index clinic visit and viewing of the PDA, although it does not clearly identify the measure(s). Example 2 clearly identifies the measures used and states that they were administered after viewing the PDA but before the subsequent clinic visit. Example 3 lists the range of measures used, categorised by type of outcome, together with their timing, and indicates that further detail is given later (see item 18).
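To make the scoring arithmetic quoted in example 2 concrete, the following minimal sketch applies the stated item values (yes=0, unsure=2, no=4 for the low-literacy Decisional Conflict Scale; yes=1, unsure=2, no=3 for the Patient Self-Advocacy Scale). The function names, and the assumption that every item is answered, are ours, not the cited study’s.

DCS_POINTS = {'yes': 0, 'unsure': 2, 'no': 4}
PSAS_POINTS = {'yes': 1, 'unsure': 2, 'no': 3}

def low_literacy_dcs(responses):
    # 10 responses expected; the sum (maximum 40) is rescaled to a 0-100
    # scale, with lower scores indicating less conflict
    raw = sum(DCS_POINTS[r] for r in responses)
    return raw * 100 / (4 * len(responses))

def patient_self_advocacy(responses):
    # 12 responses expected; averaged, with lower scores indicating greater
    # self-advocacy
    return sum(PSAS_POINTS[r] for r in responses) / len(responses)

print(low_literacy_dcs(['no'] * 10))        # 100.0, maximum decisional conflict
print(patient_self_advocacy(['yes'] * 12))  # 1.0, greatest self-advocacy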
Item 18
For any instruments used:
- Name the instrument and the version (if applicable)
- Briefly describe the psychometric properties, or cite other documents.
Descriptions of the instruments used and their psychometric performance are essential for enabling readers to assess the appropriateness of the instrument for capturing the intended constructs, the use of the instrument relative to its original design and the interpretation of results. However, a critical appraisal7 of the studies included in the updated Cochrane review of PDAs3 4 noted that very few reported the psychometric properties of the measures. Even for a ‘valid and reliable’ measure, care should be taken to report whether it has been validated in the specific study population and context, and whether any modifications have been made to the instrument or its scoring. A full description of the instruments used is recommended,7 either in the text or as an appendix; however, citations may suffice for well-established measures, provided that no modifications have been made.
Example 1
The study website presented the three postdecision aid scales (see table 1 [in original paper] for psychometric properties). … . The Osteoarthritis Decision Quality Index, Knowledge Subscale contains five multiple-choice items assessing understanding of key facts about the treatment options.69 The interactive capabilities of the web-based research platform allowed for adaption of the paper version to provide interactive corrective feedback (ie, if an incorrect answer was selected, the correct answer was presented). … The website then presented the 11-item Preparation for Decision Making Scale and the 10-item low-literacy DCS.23
Example 2
We administered the standard 16-item version and a 10-item low-literacy version of the DCS64 at the high-literacy and low-literacy sites, respectively. The standard version includes five subscales: (1) uncertainty or lack of assuredness about the decision, (2) feeling informed about the options and their benefits and risks, (3) feeling clear about one’s personal values in making the decision, (4) feeling social support in making the decision and (5) feelings of having made an effective decision and planning to follow through. The subscales have excellent internal-consistency reliability (alpha range: 0.78–0.92) and construct validity. The low-literacy version uses a question-and-answer format with three response options: ‘yes’, ‘no’ or ‘unsure’. The version has good internal-consistency reliability (alpha, 0.86) and evidence of responsiveness to change after a decision aid is delivered. Scoring conventions followed the new DCS manual, wherein scores are expressed on a 0 (low decisional conflict) to 100 (high decisional conflict) scale.70
Explanation
Example 1 illustrates a concise statement that provides the essential information, with reference to more detail in a table (provided in the original paper). The brief summary of the instruments may be sufficient for readers who are well versed in these instruments, and the specific versions are referenced for researchers who wish to explore further or replicate the study. The more detailed psychometric properties table allows readers to explore the theoretical constructs of the instruments and their quality in greater depth. Example 2, alternatively, illustrates a concise paragraph providing the definition, theoretical constructs, potential responsiveness to PDAs, versions administered, psychometric performance, scoring and interpretation.
Results
In addition to standard reporting of results:
Item 19
Describe the characteristics of the patient, family and carer population(s) (eg, health literacy, numeracy, prior experience with treatment options) that may affect patient decision aid outcomes.
Reporting the characteristics of patients, family and carers receiving PDAs in evaluation studies (eg, age, sex, health literacy, numeracy, race/ethnicity) is important for several reasons. Such characteristics may affect the effectiveness of the PDA. For example, lower health literacy is associated with lower knowledge scores, less desire for involvement in decision making and higher decisional conflict and regret scores.71 Nonetheless, the 2014 IPDAS evidence review found that 90% of PDA trials did not report health literacy and readability, although studies that address health literacy show increased knowledge and informed choice.71 Similarly, race/ethnicity has been associated with differences in preferences for screening/treatment, leading to development of PDAs tailored to different cultural contexts.68 72 73
Reporting participant characteristics also allows readers and reviewers to assess the generalisability of the study results. Reporting characteristics overall, by study arm and by important subgroups allows readers to assess the potential for selection bias or confounding, and can highlight populations within which the PDA may have differential effects (effect modification or moderation), that is, subgroups who may benefit more or less from the PDA. While most articles include sample characteristics in a table, it is important to report whether any analyses were done to test for differences among individuals who accepted/declined participation and to test for potential subgroup interactions with the study outcomes. Explanations of any observed differences may be reported in the discussion or limitations (see item 24).
Additionally, reporting is needed for characteristics that are suspected to interact with decision making, to allow reviewers to identify studies for meta-analyses. For example, as PDAs become increasingly available on the Internet, reporting participants’ level of digital comfort will facilitate assessments of web-based PDAs for individuals who are digitally naïve versus digitally savvy.12 Similarly, as patients become more familiar with shared decision making and/or suites of PDAs become available for progressive decisions (eg, chronic disease management), reporting patients' and caregivers’ familiarity with the decision context and decision self-efficacy will allow for analyses of PDAs that provide interactive levels of clinical information and/or deliberative support.23
Example 1
Table 2 [in original paper] presents bivariate relationships among sociodemographic and health characteristics across the range of primary and secondary outcomes. Older men were more likely to demonstrate decisional consistency and had higher decisional conflict. White race was associated with decreased decisional self-efficacy.50
Example 2
Table 2 [in original paper] summarises the participants’ [sociodemographic, cognitive and clinical] characteristics. Overall (n=126), study participants were primarily female, Caucasian, younger adults with college degrees and moderate knee pain. . … The study sample may have contributed to a type II error in terms of their high baseline familiarity [with their condition and treatment]. … Different results may be observed with PDAs that focus on the first decision in a chronic condition, or on clinical situations that are acute, life threatening or involve surrogate decision making.23
Example 3
There were no differences between those who participated and those who declined. … Income, education and race were significantly related to outcome variables at the bivariate level and were included in multivariate analyses in addition to gender. … Health literacy was significantly related to knowledge of health insurance information. … A similar pattern was seen for numeracy skills.74
Explanation
While patient characteristics are typically presented in a table, the above examples illustrate ways to clarify in the text whether and how patient characteristics may be related to the study outcomes. Example 1 presents the patient characteristics in a Results table showing that about 90% of participants were white and non-Hispanic; the extract above (from the paper’s Discussion) explains how the sample may not address the needs of an important subgroup of the target population and touches on the generalisability of the findings. Example 2 discusses how the characteristics of the sample (eg, chronic condition) may have contributed to the high observed knowledge, preparation and self-efficacy scores. Example 3 clearly reports data to assess selection bias and several participant characteristics that were associated with study outcomes.
Item 20
Describe any characteristics of the participating health professionals (eg, relevant training, usual care vs study professional, role in decision making) that may affect decision aid outcomes.
Reporting the characteristics of the health professionals involved in evaluation studies (eg, training, seniority, coaching, meeting with patients, delivering PDAs) is important for several reasons. As part of the delivery of a complex intervention, their characteristics may affect the effectiveness of the PDA; for example, whether they have been trained in the delivery of the PDA or in shared decision making may influence how well it works. Whether the clinicians involved are research staff employed for the study itself or the patient’s usual responsible clinicians may also matter: it may influence whether patients see the PDA as part of their normal care, thus supporting actual decision making (‘patienthood’), or see themselves more as trial participants (‘volunteerism’) than as patients considering an active treatment choice. A secondary analysis of the Cochrane review PDA database suggested that patient knowledge is greater in ‘patienthood’ trials.75
To allow reviewers to ascertain the influence of health professionals on the effectiveness of the PDA, important characteristics about the background and training of the key health professionals involved in delivery (eg, role in decision-making process, experience) should be documented. Furthermore, this will provide those wishing to use the PDA in their clinic with information about the staff and/or training needed to successfully implement the PDA in a new practice.
Example 1
Two of the 10 physicians in the control group (20%) and 5 of the 15 in the intervention group (33%) were female [PDA for depression]. The mean age of the participating general practitioners was 48.4 years with a SD of 8.0 years (control group 47.4±7.2 years, intervention group 48.9±8.4 years). The average years professional experience was 13.0±7.0 years (control group 10.6±7.4 years, intervention group 14.3±6.7 years). Gender, age and professional experience did not differ significantly between study groups (P>0.10).51
Example 2
A total of 19 physiotherapists were involved in the trial [PDA for low back pain]. Twelve physiotherapists were present at the start of the trial, and seven were randomised to the decision support arm. Of the seven physiotherapists in the intervention arm, four had more than 6 years of experience, whereas of the five who were randomised to the control arm only one of them had more than 6 years of experience (table 1 [in original paper]). The other seven physiotherapists who joined the department after randomisation were allocated to the control arm and four of them had more than 6 years of experience.76
Explanation
Example 1 clearly reports the mean age, gender and years of experience of the physicians involved in the study, and shows that these characteristics were balanced across the study groups. Example 2 describes the physiotherapists who delivered the intervention and control arms, including information on experience that might be important in interpreting the study findings or in using the PDA subsequently in practice.
Item 21
Report any results on the use of the patient decision aid:
- How much and which components were used
- Degree to which it was delivered and used as intended (also known as fidelity).
The impact of PDAs depends on the quality of the PDA and on whether, and how well, it was delivered and used. For a variety of reasons, PDAs, or parts of them, may not be delivered and/or used as initially intended. This is known as the fidelity of the intervention and has significant implications for the success of the PDA. If fidelity is assessed, as per item 14, authors should report the extent to which the PDA was delivered and used as intended, allowing readers to critically appraise any differences between intended and actual delivery and use, and the impact of those differences (positive or negative) on the effectiveness of the intervention.
Regardless of fidelity, it is also valuable to have data on how the PDA was used, for example, whether all components were viewed and for how long. Such data help readers understand, for example, the extent to which use may have affected measured effectiveness.
Such information can help (a) explain the study findings, that is, determine the extent to which the study outcomes were related to the quality of PDA delivery and the rates and patterns of PDA uptake; (b) support inferences about what processes may have mediated the relationship between the PDA intervention and study outcomes and (c) inform further studies or strategies to improve fidelity, uptake and adherence.12
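Where a web-based PDA logs component views, such use data might be derived along the following lines (compare example 3 below, which reports the proportion viewing each narrative and the mean time spent). This is an illustrative sketch only; the log structure and field names are assumptions, not taken from any cited study.

from collections import defaultdict

def summarise_use(events, n_participants):
    # events: iterable of (participant_id, component, seconds_viewed) tuples
    viewers = defaultdict(set)
    seconds = defaultdict(list)
    for pid, component, secs in events:
        viewers[component].add(pid)
        seconds[component].append(secs)
    return {
        component: {
            'proportion_viewed': len(viewers[component]) / n_participants,
            'mean_seconds': sum(seconds[component]) / len(seconds[component]),
        }
        for component in viewers
    }

log = [(1, 'narrative_video', 95), (2, 'narrative_video', 120), (1, 'risk_table', 40)]
print(summarise_use(log, n_participants=3))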
Example 1
Allocation and reception of intervention
A total of 250 patients (n=124 intervention vs n=126 control) were included in the trial, of whom 73 completed the follow-up measurement and were included in the final analysis (response rate 29.2%). Of these 73 patients, 40 were in the intervention and 33 in the control condition. Of the 40 patients in the intervention condition who completed the follow-up measurement, 30 used the decision aid. A detailed overview of the flow of participants is presented in figure 4 [in original paper].58
Example 2
Contamination and fidelity
The appendix (online) describes the [fidelity] checklist and the results of the contamination and fidelity evaluation. Of 12 maximum points, encounters in the decision aid arm had a fidelity score of 8 (3–12), whereas encounters in the usual care arm had a contamination score of 1 (0–8). Usual care encounters of clinicians who had used the decision aid previously had a contamination score of 1 (0–6).54
Example 3
Although the patient narratives were available to all participants in the narrative conditions, not all participants chose to view the narratives while reviewing the [breast cancer] decision aid. Table 3 [see Table 1 included] describes the content of each narrative, indicates the proportion of times that narrative was viewed in both the text and video narrative conditions and displays the mean time spent on the webpage with that narrative. Note each narrative was located on a separate webpage that would open when users clicked on the content. This allowed us to track use of narrative information separately from information search in the rest of the decision aid. The narratives were accessed at a similar rate in both the text and video narrative conditions. However, participants in the video narrative conditions spent more time with the narratives than participants in the text narrative conditions.77
Explanation
In example 1, the authors provide precise data on the numbers of participants who received the PDA and those who actually used it. From these data, it is clear that, although the PDA was delivered to 40 participants, not all of them actually used it. The authors in example 2 used a checklist to assess fidelity in both the intervention and control arms and reported the results of this assessment; the data suggest that the PDA was used as intended and that there was minimal contamination in the usual care arm. Example 3 provides data on both the frequency of access and the time spent by patients who accessed text or video narratives (similar rates of access, but more time spent with the video narratives).
Item 22
Report relevant results of any analyses conducted to understand how and why the patient decision aid works (also known as process evaluation).
Reporting the results of a process evaluation allows the reader to understand what happened with the delivery of the PDA within the context of the particular evaluation study. This enables the reader to put the results in context and interpret them with reference to how the PDA was used. Process evaluation thus allows an understanding of how and why the PDA worked (or did not work), including the factors that might explain or affect the impact (see item 15).
Example 1
In the process evaluation, we collected data to answer five questions about potential problems related to implementation and context [of a web-based PDA for people with a psychotic disorder]. …
The fourth question was: Could any problems be observed with fulfilment of the study protocol? Through researcher observation, several recurring themes were identified during clinical meetings in which the trial was discussed. Case managers sometimes were hesitant and felt troubled to invite intervention patients to make use of the decision tool. First, they were doubtful whether patients were able to handle either the computer program or participation in a research trial. Second, they were not sure that patients would benefit from the decision aid because not all treatment options included in the decision aid were actually offered by their organisation (eg, music therapy was listed among the treatment options, but no music therapy was currently offered because of absence of a music therapist). In addition, various clinicians reported that they were unsure when to discuss outcomes of the decision aid with their patients because not all conducted a formal treatment evaluation session with their patients following their ROM [routine outcome monitoring] assessment. Some only discussed ROM results within the clinical team and not directly with patients.
The fifth question was: Did patients experience any problems with the intervention that was not covered in the satisfaction questionnaire? Open interviews among patients who chose to use or not use the website provided some additional details on the process. First, all patients were initially informed about the decision aid by an information booklet and in a meeting with a research nurse, but most of them received additional explanation from their case manager. Some framed the decision aid predominantly within a research context ("by using the decision aid, you contribute to research"), whereas others described it as an attempt to improve services ("using the decision aid might help you reflect on the treatment you want"). This might have affected patients’ expectations of the intervention. Moreover, interviews revealed discrepancies between the policy of the local disease management programme and patients’ experiences in clinical practice. Most of the interviewed patients could not remember their ROM results being discussed with them and some could not remember whether a treatment plan was created.58
Example 2
Concerns about the participants’ use of the standard gamble exercise were first raised with researchers by a clinic doctor. He was worried that participants did not grasp the purpose of the exercise and reported difficulties working through the standard gamble with them. Initial qualitative analysis of the consultation videos and concomitant analysis of the interviews revealed that participants were confused about the use and purpose of the standard gamble. Further in-depth analysis of video and interview data … confirmed that participants experienced problems with both understanding and carrying out the standard gamble. [Results]
We examined the videos and post hoc interviews for confirming and disconfirming examples of these problems and determined that six of eight participants in this explicit arm experienced these problems, and were unable to carry out the standard gamble exercise. [Results]
On the basis of the analysis of the videos and interviews [a] decision was taken to discontinue the explicit arm of the trial on the basis of the qualitative analysis, which demonstrated that the standard gamble value-elicitation exercise was causing confusion and was unlikely to produce valid data on patient values. It was believed that it would be unethical to continue, and also that the results would be distorted or impossible to interpret.59 [Discussion]
Explanation
In the first example, this extracted section of the results clearly describes important contextual factors that might influence interpretation of the results or subsequent implementation efforts. With respect to fulfilment of the protocol, the authors demonstrate that a PDA offering options that are not actually available within the health system may see reduced use. The interviews with patients illuminate two challenges. First, the way patients position the PDA (within a research context or as an aid to their decision making) might influence its use or outcomes; this has been demonstrated elsewhere, and a subanalysis of the Cochrane review suggests it may be important for interpretation and implementation.75 78 Second, the findings suggest that the service aims were not being achieved, as patients did not recall their results or a treatment plan.
In the second example, the extract is from a process evaluation undertaken and published alongside the trial. It revealed, surprisingly, that one version of the PDA (a computerised PDA using the standard gamble as an explicit values clarification technique), despite having been codesigned with end users, was problematic for patients in the trial; this was sufficiently troublesome to lead to discontinuation of one arm of the trial on ethical grounds. The results are summarised in the above extract; the full paper includes quotes from interviews and direct observations from video recordings.
Item 23
Report any unanticipated positive or negative consequences of the patient decision aid.
Reporting unexpected consequences is an ethical imperative for researchers. Such consequences could be adverse (such as increased decisional conflict or additional service costs/utilisation) or beneficial (such as increased gains for subgroups of the population). Reporting of unanticipated positive and negative consequences supports systematic review and meta-analysis of the effectiveness of PDAs. It can also alert researchers, clinicians and policymakers to potential emerging areas of interest.
Example 1
Interaction Effects Between SDMI [shared decision-making intervention] and Cancer History
From the separate analyses for affected [past history of breast or ovarian cancer] and unaffected women [no past history]. … In the short term, … the SDMI [shared decision-making intervention] had no effect on affected or on unaffected women. In the long term, for unaffected women, beneficial effects were found on all outcome measures and most were significant. The effect sizes were larger for unaffected women compared with the whole group. … For affected women, insignificant detrimental effects were found on the above-mentioned outcomes for which an interaction effect was found.79 [Results]
Example 2
The study did not find a significant difference between using an implicit or explicit deliberative guidance approach, on average, for decisions about surgical versus non-surgical management of chronic knee osteoarthritis. However, results indicate that there are some subgroups of patients who exhibit different deliberative styles, in terms of information seeking and deliberative engagement. Higher levels of information-seeking and active-engagement were associated with lower decisional conflict levels. Higher levels of active-engagement were associated with higher levels of decision self-efficacy. …
This information may be useful to clinicians who wish to increase the patient-centeredness of their decision support interventions. … Assessing the match between patients’ needs and decision aid design elements may play an important role in addressing the marked variations observed in the rates of surgery for chronic conditions, such as knee osteoarthritis. It’s possible that observed geographic variations are genuinely warranted, if well-informed patients are receiving the care that they clearly value.80
Explanation
Example 1 is taken from a trial of a PDA that was developed for women who were BRCA1/2 mutation carriers, and therefore at increased risk of breast and ovarian cancer, who are faced with the choice between screening and prophylactic surgery for breasts and/or ovaries. The study sample included women who had not had cancer (unaffected) as well as those who had already experienced either breast or ovarian cancer (affected). The PDA (referred to as SDMI) showed an overall beneficial effect for unaffected women, whereas affected women tended to experience detrimental effects. This example clearly states that negative outcomes were encountered in the subgroup who had previous cancer, as shown by interaction analysis. The authors were surprised by this finding and addressed several potential explanations in the Discussion section but concluded that "it remains unclear why the SDMI is not effective in affected women".79
Example 2 illustrates the reporting of an unexpected benefit found in the subgroup analyses: decision-making outcomes improved when the web-based PDA’s interactive features enabled patients to ‘match’ their preferred information-seeking/deliberative styles (ie, high/low engagement). Reporting these unexpected findings allows future PDA designers and researchers, as well as clinicians, to consider the potential value of targeting, and/or of allowing patients to self-tailor, the information and support they need.
Discussion
As part of the standard discussion section (summary of key findings, interpretation, limitations and conclusions):
Item 24
Discuss whether the patient decision aid worked as intended and interpret the results taking into account the specific context of the study including any process evaluation.
The Discussion should summarise the findings while acknowledging the context and other factors that may have influenced them. For example, it should explain any findings on whether or how the PDA worked (or did not work) based on: use of the PDA, and use as intended (fidelity; see items 14 and 21); analyses by patient or clinician characteristics (see items 19 and 20) and the context of the study, including any results from the process evaluation (see items 15 and 22).
Example 1
Measuring decision quality as a composite measure was possible in this study. A quality decision, the ultimate goal of PDAs, is ideally measured by using patient’s score on the knowledge test as an indicator of being informed, and measuring the concordance between the informed patient’s values for outcomes of options and the actual choice of surgery (or non-surgery). In this study, patients exposed to the PDA intervention obtained significantly higher decision quality (56%) compared with those who received usual education (25%).81
Example 2
Not all patients in the intervention group were actually offered the possibility to use the decision aid and, more importantly, ROM [routine outcome monitoring] and treatment evaluation meetings in which the treatment plan was to be discussed in a process of shared decision making did not always take place. Moreover, interviews indicate that the Web-based intervention might have been framed differently to different patients, which may have shaped their expectations and affected their evaluation. An interesting finding in the process evaluation was that patients who perceived their involvement in medical decision making as low were judged by clinicians to be less capable of participating in decision making. This could imply that patients participate less because they are less capable. Nevertheless, we cannot rule out that patients participate less because clinicians consider them less capable and, therefore, provide less opportunities for patients to participate in decision making.58
Example 3
Our results show an effect size on decisional conflict that is comparable to other studies of decision aids, suggesting that the computerised decision aid does have a measurable and clinically important impact that is greater than the doctor-led paper-based guidelines.
Each clinic [intervention and control] was delivered by a single [different] doctor, raising the question as to whether the findings reflect the different interventions, the different doctors delivering the interventions or some combination of the two. In some respects this is a false distinction; we were evaluating a package of decision support, and we attempted to minimise any doctor-specific effect by training the doctors in the intervention and the desired mode of delivery.57
Explanation
Examples 1 and 3 clearly indicate that the PDA worked as intended in terms of the expected findings. Example 2 discusses several observations from the process evaluation that might affect the interpretation of the findings and that would also be of value to others seeking to implement the PDA (or other PDAs in settings where similar context might be important). Example 3 also discusses the potential influence of the context on the process of delivering the PDA, and reflects on the importance, or otherwise, of this context.
Item 25
Discuss any implications of the results for patient decision aid development, research, implementation and theory, frameworks or models.
Identifying and discussing the implications of the study results helps readers understand the importance and potential impact of the findings. This item (specific to PDA studies) adds to the standard discussion items (ie, limitations, generalisability, interpretation) found in other reporting guidelines. Authors should place their findings in the context of what is already known about PDAs as well as, for example, current programmes, policies, incentives, research and teaching initiatives that might support PDA implementation (see also item 24).
Example 1
Similar to previous investigators, we found that patients were more knowledgeable about the risks and benefits of various treatment alternatives for osteoarthritis of the hip or knee, were further along in their decision-making preconsultation, and had more confidence in knowing what questions to ask their surgeon, on the basis of their responses to our pre-consultation survey. Surgeons also believed that patients who engaged in shared decision making asked more appropriate questions and made more efficient use of their time during their office visit. These findings could facilitate greater adoption of shared decision-making methods among orthopaedic surgeons, although many issues remain to be resolved.
Despite the well-documented benefits of shared decision making tools, they are not commonly used in orthopaedic surgery. There are currently many barriers to adoption, including the costs and logistical challenges associated with the implementation of shared decision-making programs, lack of familiarity and training in shared decision-making methods among surgeons, and a limited comparative effectiveness research base available for developing decision aids. To facilitate widespread adoption of shared decision-making tools in orthopaedics, further work is needed to simplify and to reduce the cost of implementation, perhaps through the use of non-medically trained volunteers as coaches. Moreover, many healthcare stakeholders have portrayed shared decision making as reducing utilization rates of elective surgical procedures such as total knee arthroplasty, which could make surgeons less eager to adopt these potentially value-enhancing tools, particularly in a fee-for-service payment system.82
Example 2
Implications for research and clinical practice. … For the use of PDAs, such as PANDAs [a PDA about glycaemic control], in routine clinical practice to become the accepted norm, the new GP [general practitioner] clinical commissioning groups will need to be aware of the benefits of the use of such aids to ensure that decision aids become a professional standard in, for example, newly commissioned pathways for a long-term condition such as diabetes. Investment will also be necessary for the development and the continuing evaluation of decision aid use, as well as for the training of all members of the multidisciplinary team in the importance and in the practical use of decision aids in primary care. Both the patient’s experience and patient/clinician satisfaction with the care received and provided is likely to be much improved if this professional standard is adopted by commissioning groups.83
Example 3
This study shows that the decision aid may be an effective way to support a screening policy that values informed choice and equity in access to informed choice, as opposed to policy focused on achieving high uptake. These results present an important dilemma for policy makers and healthcare providers on how to communicate to the public about screening.84
Example 4
When choosing visual aids to communicate statistical information, PtDA [patient decision aid] designers and providers should be aware of benefits and limitations of graphical representations—especially with more complex representations such as flowcharts. Incorporating comprehension checks into PtDAs would help identify misapprehension of graphically presented data and correct misunderstandings.85
Explanation
Example 1 describes the potential value of the findings for implementation of shared decision making (SDM) with orthopaedic surgeons, and then discusses wider issues in terms of potential barriers to the use of PDAs and SDM in orthopaedic practice. The authors make suggestions for facilitating widespread adoption: these include simplifying and reducing the cost of implementation, and they note the posited influence of fee-for-service systems on surgeons’ acceptance, were a PDA to lead to reduced uptake of the surgical intervention. Example 2 (in a section headed ‘Implications for research and clinical practice’) highlights the importance of healthcare commissioners in the English NHS and their critical role in supporting uptake of PDAs, as well as acknowledging the need for investment. Examples 3 and 4 briefly discuss the implications of the results, with example 3 highlighting the impact that results from their study could have in supporting new policies, or challenging existing ones, for screening based on encouraging uptake rather than informed patient choice. Example 4 illustrates how findings from the study could influence how PDA developers choose and incorporate visual elements in the development of their PDAs.
Conflict of interest
Item 26
All study authors should disclose if they have an interest (professional, financial or intellectual) in any of the options included in the patient decision aid or a financial interest in the decision aid itself.
Most journals require authors to declare conflicts of interest as part of standard reporting. However, such declarations usually focus on financial aspects, such as funding, and may not specifically address professional or intellectual conflicts that are unique to PDAs, for example, a potential interest in patients selecting one specific treatment option over others. Reporting this item allows investigators, clinicians and readers to assess the potential introduction of bias in the design, conduct, interpretation and reporting of studies of PDAs.
Example 1
Declaration of Personal Interests: Dartmouth-Hitchcock Medical Center and Cedars-Sinai Medical Center have a patent pending for a ‘System and Method of Communicating Predicted Medical Outcomes’, filed 3/34/10. Dr Corey Siegel, Dr Lori Siegel and Dr Marla Dubinsky are inventors. CS, ST, MS, and MD are consultants to Prometheus Labs. CS, ST, and MD are consultants for AbbVie, Janssen, Takeda and UCB. MS is a consultant for AbbVie and Janssen. DM is a consultant for Genentech, Janssen, Ferring, Merck, and UCB.
Declaration of Funding Interests: Dr Siegel is supported by AHRQ grant 1R01HS021747-01. There was no commercial support related to this project.86
Example 2
Competing interests: GE, M-AD and AB, authors on this paper, lead the Option Grid Collaborative and unincorporated association of individuals engaged in the development and dissemination of Option Grid decision aids for clinical encounters under the auspices of the Dartmouth Institute for Health Policy and Clinical Practice. All authors have completed the Unified Competing Interests form at http://www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author).87
Example 3
The Knowledge and Evaluation Research (KER) Unit at Mayo Clinic houses the processes of design and evaluation of decision aids, decides on topics of investigation, pursues funding, designs and conducts evaluation trials and reports their findings. Investigators at the KER Unit, including authors of this manuscript, do not receive funding from any for-profit pharmaceutical or device manufacturer, nor do they receive any royalties or other monetary benefits, directly or indirectly, from the use of the decision aids. The KER Unit makes effective decision aids available online free of charge at http://shareddecisions.mayoclinic.org.88
Explanation
Example 1 shows how one might address potential conflicts of interest that include the more traditional financial ones (eg, grant funding, industry consulting, salary) and that go beyond these to intellectual issues (eg, patents on intellectual property). Example 2 addresses intellectual investment in a PDA among those who developed it as a potential source of conflict of interest. Example 3 additionally addresses the issue of potential gains to be made from the PDA itself.
Summary and Conclusions
This E&E document provides a descriptive rationale and illustrative examples of how to address each element included in the SUNDAE Checklist.1 This additional information can support authors in addressing the guidelines when preparing reports of evaluations of PDAs for publication. The SUNDAE Checklist and accompanying E&E may also be of value to journal editors, who may wish to reference them in author guidelines and in guidance for reviewers.
This E&E, the SUNDAE Checklist and the appendix (ie, the table of types of evidence supporting the Checklist items) are also available on the IPDAS website (http://ipdas.ohri.ca/resources.html) to promote public access. The Checklist should be used alongside other relevant reporting guidelines, such as CONSORT-PRO89 (CONsolidated Standards of Reporting Trials Patient-Reported Outcomes) for RCTs reporting patient-reported outcomes or TIDieR90 (Template for Intervention Description and Replication) for describing interventions.
The IPDAS reporting guidelines workgroup will continue to monitor and improve the materials. To that end, the corresponding authors welcome feedback and comments, particularly from those who use the Checklist and E&E, so that both can be updated and improved over time.
Acknowledgments
The authors want to acknowledge the support of the IPDAS Steering Committee and chapter participants; the many participants in the Delphi process for their time and invaluable contributions; Greg Ogrinc and Tammy Hoffman for advice on methodology and development process and Sarah Ivan for project support.
Footnotes
Handling editor Kaveh G Shojania
Contributors The editorial writing team was led by KS and RT and included PA and AH. All listed authors contributed substantially to the writing of the manuscript. Each author was assigned and led drafting of specific items, and had the opportunity to review the manuscript prior to its submission. RT and AH were the leads for the manuscript and contributed to the conceptualisation, organisation and overall editing of the manuscript; preparation of the background and summary; and preparation of the appendices. SLS contributed additional review and summary of the evidence for the items.
Funding The in-person work group meetings were supported through grants from the United Kingdom’s Health Foundation (grant # 7444 Thomson PI) and the Agency for Healthcare Research and Quality’s Small Conference grant (1R13HS024250-01 Sepucha PI). ASH is funded by the Shared Decision-Making Collaborative of the Duncan Family Institute for Cancer Prevention and Risk Assessment at the University of Texas MD Anderson Cancer Center.
Competing interests KRS receives salary support as a scientific advisory board member for the Informed Medical Decisions Foundation, now part of Healthwise, a not-for-profit organisation that develops patient decision aids. VS received personal fees from Merck Pharmaceuticals. During the last 36 months, SS has received funding from the Agency for Health Services Research and Quality for a scoping review to identify a research agenda on shared decision making and high value care. During this time, she also completed unfunded research or papers on patient decision aid evaluations and developed the Reaching for High Value Care toolkit, a toolkit of evidence briefs and resources on patient-centred high value care for all levels of system leaders. As part of those efforts and efforts on the current manuscripts, SS has developed a series of research resources on reporting research. She is considering the potential benefits and harms of pursuing intellectual property protection for some of these efforts, but has not initiated these to date.
Provenance and peer review Not commissioned; externally peer reviewed.