
Advancing the science of patient decision aids through reporting guidelines
Robert J Volk,1 Angela Coulter2

1 Department of Health Services Research, Division of Cancer Prevention & Population Sciences, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
2 Health Services Research Unit, Nuffield Department of Population Health, University of Oxford, Oxford, Oxfordshire, UK

Correspondence to Professor Robert J Volk, Department of Health Services Research, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; bvolk@mdanderson.org


Patient decision aids (PDAs) are tools designed to help people make deliberative choices about their healthcare options using the best available evidence. They provide balanced information about the options and help patients construct, clarify and communicate what is important to them when making healthcare choices. PDAs can prepare patients to make informed, values-based decisions with their healthcare providers.1–3 The evidence base on PDAs has grown rapidly over the past two decades. The most recent update to the Cochrane systematic review of PDAs included 105 randomised controlled trials published through April 2015.2 This number excludes trials comparing more complex with simpler PDAs and other evaluations of PDAs using non-randomised designs. Compared with people who do not use decision aids, those who do have better knowledge of the options, report feeling clearer and better informed about the options, have more accurate expectations of the benefits and harms of options, and participate more in decision-making.2

Founded in 2003, the International Patient Decision Aid Standards (IPDAS) Collaboration is a multidisciplinary group of researchers, clinicians and other stakeholders from around the world who share an interest in the development and use of PDAs. A description of the IPDAS history, its membership and activities can be found at the IPDAS website (http://ipdas.ohri.ca/). It is a voluntary organisation that receives no centralised financial support for its efforts. The Collaboration was established in response to concerns about the rapid increase in the number of PDAs of unknown quality that were being developed by different individuals and groups around the world. The Collaboration prioritised the need for a set of standard criteria to guide the quality appraisal of PDAs. The purpose of IPDAS is to enhance the quality and effectiveness of PDAs by establishing a common evidence framework for the content, development, implementation and evaluation of PDAs.

The Collaboration’s initial task was the development of a checklist of internationally approved criteria for determining the quality of PDAs (see table 1). Twelve quality dimensions were identified, and workgroups conducted extensive evidence reviews, resulting in a series of background documents (http://ipdas.ohri.ca/resources.html). These documents were then used as the evidence sources in a modified Delphi consensus voting process involving >100 stakeholders from 12 countries to select a final set of criteria for the checklist.4 The final IPDAS Checklist includes 74 criteria, and a shorter version is used to report the quality of aids included in the Ottawa A to Z Decision Aid Inventory (https://decisionaid.ohri.ca/cochinvent.php). A second initiative was undertaken to provide a more precise, quantitative measure of a decision aid’s quality: the IPDAS instrument (IPDASi) includes 10 dimensions with 47 items, and a shorter, 19-item version is available.5

Table 1  Achievements of the International Patient Decision Aid Standards (IPDAS) Collaboration

As the importance of certifying PDAs was recognised, the IPDAS Collaboration undertook a third initiative to identify a minimum set of standards that could be used to certify the quality of PDAs. A modified Delphi process involving >100 individuals with experience in decision aids from 16 countries was used to rate each IPDASi criterion on the basis of the potential for harmful bias in a patient’s decision-making if the criterion were absent or of low quality in a decision aid. The criteria were then grouped into three broad categories: qualifying criteria, essential for a tool to be considered a PDA (6 items); certification criteria, necessary for an aid to be certified (6 items, plus 4 specific to aids about screening); and quality criteria, comprising items not essential for reducing harms (28 items).6 Recognising how rapidly the evidence base informing the IPDAS Checklist was evolving, the Collaboration conducted an extensive update of the empirical and theoretical evidence for each of the 12 original quality dimensions. In keeping with the Collaboration’s volunteer tradition, 102 individuals from 10 countries participated in the updating effort and authored the updated background documents.7

Standards developed by IPDAS are now being used to support the development, certification and adoption of high-quality PDAs across the globe. In the USA, certification of PDAs has gained momentum at both the state and national levels. Washington State has passed legislation that encourages shared decision-making conversations supported by the use of certified PDAs.8 Building on the IPDAS experience, Washington’s Health Care Authority launched a process for certifying PDAs in 2016. The National Quality Forum, a non-profit organisation that provides standards for measuring healthcare quality, relied on the Washington Health Care Authority experience and the expertise of the IPDAS Collaboration in developing the document National Standards for the Certification of Patient Decision Aids, with the goal of supporting a national certification effort (http://www.qualityforum.org). While no other country has yet developed a formal certification scheme for PDAs, IPDAS criteria have influenced the development and evaluation of these tools around the world. The Ottawa inventory includes decision aids from 11 countries and regions (Australia, Canada, Finland, Germany, Hong Kong, Italy, the Netherlands, Saudi Arabia, Sri Lanka, the UK and the USA), and a recent overview of international developments in shared decision-making in 22 countries added five more countries where IPDAS has been influential (Denmark, Malaysia, Norway, Switzerland and Taiwan).9

In this issue of BMJ Quality & Safety, Sepucha and colleagues present the most recent products of the IPDAS Collaboration: new standards for reporting of PDA evaluation studies and a checklist for authors and journal editors.10 11 This international group of decision science researchers drew its members from the larger IPDAS Collaboration. Highlighting significant gaps in published PDA evaluations, the IPDAS reporting guidelines workgroup (IPDAS-RG) undertook a rigorous, multiphased, iterative development process, adapted from the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network approach, to produce the new Standards for UNiversal reporting of patient Decision Aid Evaluations (SUNDAE) Checklist. Drawing on the 12 IPDAS quality dimensions and other reporting guidelines, the IPDAS-RG completed a comprehensive needs assessment and drafted an initial set of candidate items. A consensus process followed: items were reviewed during a workshop at the 2015 International Shared Decision Making (ISDM) conference and then refined through a two-stage Delphi process involving an international group of researchers, developers, clinicians, patient/consumer advocates, journal editors and guideline developers. The resulting 26-item SUNDAE Checklist should greatly enhance the transparency and completeness of reporting of PDA evaluations.

Why do reporting standards matter? Clear and consistent reporting of study methods and results will, among other benefits, improve our understanding of how theory shapes patient outcomes, allow synthesis of findings from multiple studies addressing specific decision contexts, types of aids or patient populations, and support replication. The use of reporting standards should also improve the quality of the evidence on implementation and effectiveness needed to inform and refine certification criteria for PDAs. Implementation of PDAs remains a significant challenge. A better understanding of how aids are implemented in published evaluations, including when decision support was provided within the workflow (delivery channels), who delivered the aid, what format was used and how fidelity was assessed, is essential for improving insights into the best use of PDAs. An example is the quality improvement project by Mangla et al,12 which also appears in this issue of BMJ Quality & Safety and reports on a three-phased strategy for promoting the delivery of PDAs for hip and knee osteoarthritis, lumbar herniated disc and lumbar spinal stenosis.11 These authors followed the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines13 14 for quality improvement reports and provide a rich account of the methods used to deliver the aids.

Addressing conflicts of interest in publications about PDA evaluations will be particularly challenging for authors and journals. Authors are expected to disclose any interest in the options included in the aid or any financial interest in the decision aid itself. The SUNDAE developers appropriately define a spectrum of conflicts, including the following: professional interests, where an author’s specialty may have a stake in the choices a patient makes; financial interests, where the author may benefit from the sale or use of an aid, or from the options included in an aid; and intellectual interests, such as benefiting academically or through other intangible personal gain. With many groups now producing PDAs, transparency is essential, and we must be vigilant in ensuring that the full range of conflicts is addressed in any published evaluation.

We encourage authors to include the SUNDAE Checklist with manuscripts they submit for publication when reporting evaluations of PDAs. There are, however, practical limits to addressing the full checklist in a single publication; the IPDAS-RG members note that authors may choose to reference other publications or supplementary materials. To address this concern, we suggest that such source documents become a best practice for decision aid developers and accompany the release of PDAs.

The rapidly increasing pace of decision aid development means that formal evaluation and publication in academic journals will likely apply to only a minority of tools in future. However, we believe the SUNDAE guidelines should be observed by all decision aid developers. If PDAs are to move from the periphery into mainstream care, they must gain the trust of clinicians and patients. To achieve this, developers should provide clear information about their processes, making it available on websites or on request. The SUNDAE Checklist shows them how this can be done.

References

Footnotes

  • Contributors Both authors contributed to the conception of the paper, critically read and modified subsequent drafts, and approved the final version.

  • Funding This work was partially supported by a grant from The University of Texas MD Anderson Cancer Center Duncan Family Institute for Cancer Prevention and Risk Assessment (RJV).

  • Competing interests RJV and AC are members of the Steering Committee of the International Patient Decision Aid Standards Collaboration. From 1999 to 2016, AC undertook paid consultancy for the not-for-profit Informed Medical Decisions Foundation of Boston, USA, which carried out research and development on shared decision-making and patient decision aids.

  • Provenance and peer review Commissioned; internally peer reviewed.
