
Choice architecture in physician–patient communication: a mixed-methods assessment of physicians’ competency
Joanna Hart1,2,3,4, Kuldeep Yadav1, Stephanie Szymanski1, Amy Summer1, Aaron Tannenbaum1,2, Julian Zlatev5, David Daniels6, Scott D Halpern1,2,3,4

1 Palliative and Advanced Illness Research Center, University of Pennsylvania, Philadelphia, Pennsylvania, USA
2 Division of Pulmonary, Allergy, and Critical Care, Department of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
3 Center for Health Incentives and Behavioral Economics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
4 Department of Medical Ethics and Health Policy, University of Pennsylvania, Philadelphia, Pennsylvania, USA
5 Negotiation, Organizations & Markets Unit, Harvard Business School, Boston, Massachusetts, USA
6 Department of Management, Business School, Hong Kong University, Hong Kong, Hong Kong

Correspondence to Dr Joanna Hart, Palliative and Advanced Illness Research Center, Center for Health Incentives and Behavioral Economics, University of Pennsylvania, Philadelphia, PA 19104, USA; joanna.hart@pennmedicine.upenn.edu

Abstract

Background Clinicians’ use of choice architecture, or how they present options, systematically influences the choices made by patients and their surrogate decision makers. However, clinicians may incompletely understand this influence.

Objective To assess physicians’ abilities to predict how common choice frames influence people’s choices.

Methods We conducted a prospective mixed-methods study using a scenario-based competency questionnaire and semistructured interviews. Participants were senior resident physicians from a large health system. Of 160 eligible participants, 93 (58.1%) completed the scenario-based questionnaire and 15 completed the semistructured interview. The primary outcome was choice architecture competency, defined as the number of correct answers on the eight-item scenario-based choice architecture competency questionnaire. We generated the scenarios based on existing decision science literature and validated them using an online sample of lay participants. We then assessed senior resident physicians’ choice architecture competency using the questionnaire. We interviewed a subset of participating physicians to explore how they approached the scenario-based questions and their views on choice architecture in clinical medicine and medical education.

Results Physicians’ mean correct score was 4.85 (95% CI 4.59 to 5.11) out of 8 scenario-based questions. Regression models identified no associations between choice architecture competency and measured physician characteristics. Physicians found choice architecture highly relevant to clinical practice. They viewed the intentional use of choice architecture as acceptable and ethical, but felt they lacked sufficient training in its principles to use it intentionally.

Conclusion Clinicians assume the role of choice architect whether they realise it or not. Our results suggest that the majority of physicians have inadequate choice architecture competency. The uninformed use of choice architecture by clinicians may influence patients and family members in ways clinicians neither anticipate nor intend.

  • cognitive biases
  • communication
  • decision making
  • graduate medical education
  • human factors

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Introduction

Shared decision making is a common and desirable component of clinical medicine that requires clinicians to guide patients and their surrogate decision makers through preference-sensitive healthcare decisions.1 Although expertise in communication is a core competency for clinicians, they may lack sufficient understanding of human decision making to guide choices purposefully and ethically. For example, clinicians may not be aware of how decision makers respond to particular choice presentations in predictable, scientifically established ways. Indeed, recent work has shown that professional groups who are regularly in positions to influence the choices of others lack such competency.2–4

Choice architecture refers to the environment in which people make decisions.5 6 The architect of a building creates a design that influences how people move throughout that physical space. There is no ‘neutral’ building design because the resulting environment, such as the placement of stairs relative to the elevators, invariably influences how people move about the building.5 Similarly, there is no neutral choice architecture because every manner of presenting choices shapes how decision makers evaluate and select options.7–9 The ways in which clinicians structure information about medical options influence how patients and their surrogates think about the available choices and make decisions, even when the clinician does not intend to influence the decision maker in any given direction.10 Therefore, in their inevitable roles as choice architects, clinicians must consider how their presentation of options may persuade patients to select or avoid particular choices.11

Physicians who are unable to understand choice architecture and its implications may present choices to patients and surrogates in a manner that influences their choices in ways the physicians did not recognise.12 Physicians’ duty to maximise the welfare of their patients means that they must consider the positive or negative consequences of their choice architecture, including subtle differences in how they present options or information. However, whether physicians are able to predict or even recognise these consequences is unknown. This study aims to assess the extent to which physicians are able to anticipate the influence of different choice presentations on decision makers, to explore their perspectives on the applicability and ethical boundaries of choice architecture in healthcare, and to examine the sources of their relevant knowledge of choice architecture.

Methods

Study design

From September 2016 to March 2017, we used a mixed-methods approach to assess physicians’ choice architecture competency (online supplemental appendix 1). First, we developed scenario-based questionnaire items to examine choice architecture competency using existing decision science literature. Second, we validated the questionnaire items through a randomised survey using an online research panel of laypersons. Third, we administered the final scenario-based questionnaire to senior resident physicians. These physicians all were in their final year of training in a clinical specialty that includes frequent communication with patients and surrogate decision makers. Fourth, we conducted semistructured interviews with a subset of participating physicians to explore their answers to the scenario-based questionnaire, the sources of their knowledge of choice architecture and their views on the ethical use of choice architecture in clinical medicine. Lastly, we conducted member checking among several interviewed physicians.


All subjects provided informed consent to participate.

Development and validation of scenarios to assess choice architecture competency

First, we developed the scenario-based items for the choice architecture competency questionnaire based on existing decision science literature.6 7 13–17 We modified scenarios used in published experiments to make them applicable to the healthcare setting and evaluate physicians’ understanding of choice architecture. Each preliminary item was structured similarly: respondents would predict the relative effect of two choice environments (A vs B) on a decision maker or individual patient (online supplemental appendix 2).

Second, we tested the validity of our proposed correct answers for these proposed competency questionnaire items using online participants. Essentially, this step allowed us to confirm that the choice environments included in our final set of scenario-based items affected lay individuals in the manner we expected based on the decision science literature. We provide an example of a competency questionnaire item and the validation step items in online supplemental appendix 2. We recruited 269 online Amazon Mechanical Turk (MTurk) users who were fluent in written English and >18 years old. MTurk is a crowdsourcing platform commonly used for research because researchers can rapidly recruit individuals to complete surveys and tasks posted on the MTurk website.18 We provided them with nominal compensation in US dollars. Their mean age was 32.0 years (SD=2.4), and their median completion time was 21 min (IQR=12–37). Additional demographic information about the MTurk sample is detailed in online supplemental appendix 3. All MTurk participants viewed the three items assessing default effect, endowment effect and social norms. They were randomised to one of two choice environments (presentation 1 or presentation 2) and made a selection (online supplemental appendix 2) for the remaining seven items (online supplemental appendix 4). We compared their selections or responses for each item using the Student’s t-test (online supplemental appendix 4). If there was a statistically significant difference in MTurk participants’ selections in the direction supported by the published evidence, then we considered the proposed competency questionnaire item valid. We validated 7 of the 10 scenarios with the MTurk participants and included only these validated items in the physicians’ scenario-based competency questionnaire. We added an eighth item involving the frequency of dosing and medication adherence. Because this item is focused on behaviour over time (ie, adherence given different prescribed medication regimens), this item was not amenable to survey validation. We included this item based on prior empirical work on medication adherence that validates a single correct answer (ie, that daily medication regimens are associated with greater adherence than intermittent regimens).19 Of the final eight items, seven represent direct influences on decision makers’ choice behaviour (ie, choice architecture). The remaining item, anchoring bias, does not represent a direct influence on decision makers’ choice behaviour, but rather an influence on decision makers’ risk estimation that may be used for their future medical choices.
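As a concrete illustration of this validation step, the following sketch in R (the software used for the study’s analyses, as described below) applies a two-sample t-test to simulated responses for a single item. The data frame, column names and simulated proportions are assumptions for illustration only; this is not the study’s data or analytic code.

    # Illustrative validation check for a single scenario item, using simulated
    # (not study) data. Each online participant is randomised to one of two
    # choice presentations; the item counts as valid if selections differ
    # significantly between presentations in the direction predicted by the
    # decision science literature.
    library(tidyverse)

    set.seed(2016)
    item_responses <- tibble(
      presentation   = factor(rep(c("presentation_1", "presentation_2"), each = 100)),
      chose_option_a = c(rbinom(100, 1, 0.65), rbinom(100, 1, 0.45))  # simulated choices
    )

    validation <- t.test(chose_option_a ~ presentation, data = item_responses)

    observed_direction <- item_responses %>%
      group_by(presentation) %>%
      summarise(prop_a = mean(chose_option_a))

    # Retain the item only if p < 0.05 and presentation 1 increases selection
    # of option A, matching the published direction of influence.
    item_valid <- validation$p.value < 0.05 &&
      observed_direction$prop_a[1] > observed_direction$prop_a[2]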

Research participants

We administered the choice architecture competency questionnaire to senior resident physicians. We identified physicians in their final year of accredited residency training from three hospitals within a tertiary academic health system. We chose these physicians because they were nearing independent practice, already functioned with a high degree of clinical autonomy given their seniority, and were likely to engage in shared decision making. All eligible physicians were fluent in written English, ≥18 years old and enrolled in programmes recognised by the Accreditation Council for Graduate Medical Education. Physicians on leave during the study period were excluded, as were specialties that did not require substantial face-to-face contact with patients. The included patient-facing specialties were anaesthesiology, dermatology, emergency medicine, family medicine and community health, internal medicine, neurology, obstetrics and gynaecology, physical medicine and rehabilitation, psychiatry, radiation oncology, general surgery, and surgical specialties including ophthalmology, orthopaedics, otolaryngology and urology. A database of all potentially eligible residents was compiled using the staff directories of the health system. All physicians in the database were invited via email to participate in the scenario-based questionnaire.

We recruited a subset of physicians who completed the competency questionnaire to participate in semistructured interviews. We did not offer interviews to all originally approached physicians or to all physicians who completed the competency questionnaire. Physicians were selected for these interviews sequentially based on their scores on the questionnaire and their clinical specialty. We sought to represent physicians with a range of competencies in choice architecture and from diverse medical specialties. These interviews explored physicians’ responses to the scenario-based questions; their views on how relevant, influential and ethical choice architecture is in medical practice; and the sources of their knowledge about choice architecture.

Choice architecture competency questionnaire for physicians

Using the web-based Qualtrics platform (Provo, Utah), we administered the validated eight-item scenario-based questionnaire to physicians and assessed their competency in eight decision-making principles of choice architecture (table 1).20 Consenting physicians received the eight scenario-based questions in random order (online supplemental appendices 5 and 6). For each question, physicians reviewed both choice presentations (1 and 2) and indicated how a particular presentation would influence the decision maker’s selection of choice options (A or B) in comparison with the other presentation. In other words, physicians were asked to predict the relative effect of the two choice presentations (1 or 2) on the choice options (A or B). We included a neutral option (C) for physicians who felt that the given choice presentations would not predictably influence the decision maker to select a particular choice option (A or B). We also included, in fixed positions (ie, questions 1 and 6), two additional unscored questions with no clear direction of influence, to encourage selection of the neutral option as an acceptable response (online supplemental appendix 5). These unscored items were not validated in the same way as our other items; they were based on existing decision science literature but presented conflicting influences and therefore had no obvious correct answers based on the presented choice architecture. We also asked physicians to report their sociodemographic information, including their political party affiliation and political views; these are the only sociodemographic characteristics that have been consistently associated with views on the acceptability of nudging, or the intentional use of choice architecture to influence behaviour.21 Physicians were compensated US$25 for completing the questionnaire.

Table 1

Decision-making principles of choice architecture included in the scenario-based questionnaire

We used descriptive statistics to examine physicians’ characteristics. We calculated the primary outcome of physicians’ competency in choice architecture as the number of correct answers out of the eight scenario-based questions. We conducted linear regressions to examine the association between physicians’ characteristics and choice architecture competency. All statistical analyses were performed in RStudio (V.1.1.456, RStudio, Boston, Massachusetts)22 using the R language for statistical computing (V.4.0.1, R Foundation, Vienna, Austria)23 and the tidyverse package (V.1.3.0).
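To make the scoring and analysis concrete, here is a minimal sketch in R under assumed, illustrative data structures: a long-format response table, an answer key marking the two unscored filler items, and placeholder covariate names (age, gender, specialty). These names and the simulated values are assumptions for illustration; this is not the study’s analytic code, and the study’s exact variables and confidence interval computation may differ.

    # Hypothetical sketch of scoring and analysis on simulated data.
    library(tidyverse)
    set.seed(2017)

    # Answer key: eight scored scenario items plus two unscored fillers
    # (the fillers contribute nothing to the competency score).
    answer_key <- tibble(
      item    = c(paste0("scenario_", 1:8), "filler_1", "filler_2"),
      scored  = c(rep(TRUE, 8), FALSE, FALSE),
      correct = c(sample(c("A", "B"), 8, replace = TRUE), NA, NA)  # placeholder key
    )

    # Long-format responses: one row per physician per item; answers are
    # A, B or the neutral option C.
    responses <- expand_grid(physician_id = 1:93, item = answer_key$item) %>%
      mutate(answer = sample(c("A", "B", "C"), n(), replace = TRUE))

    # Primary outcome: number of correct answers on the eight scored items.
    competency <- responses %>%
      left_join(answer_key, by = "item") %>%
      filter(scored) %>%
      group_by(physician_id) %>%
      summarise(score = sum(answer == correct))

    # Descriptive statistics (mean, SD and a normal-approximation 95% CI).
    competency %>%
      summarise(
        mean_score = mean(score),
        sd_score   = sd(score),
        ci_lower   = mean_score - 1.96 * sd_score / sqrt(n()),
        ci_upper   = mean_score + 1.96 * sd_score / sqrt(n())
      )

    # Linear regression of competency on physician characteristics
    # (covariates here are illustrative placeholders, not the study variables).
    physicians <- tibble(
      physician_id = 1:93,
      age          = rnorm(93, mean = 30, sd = 2.4),
      gender       = sample(c("male", "female", "other"), 93, replace = TRUE),
      specialty    = sample(c("internal medicine", "anaesthesiology", "surgery"),
                            93, replace = TRUE)
    )

    model <- competency %>%
      left_join(physicians, by = "physician_id") %>%
      lm(score ~ age + gender + specialty, data = .)
    summary(model)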

Semistructured interviews

The interview focused on obtaining a deeper understanding of physicians’ rationales for their answers on the questionnaire and views on the training in and use of choice architecture in medicine. Two investigators (KY and SS) conducted and recorded the interviews individually with the participating physicians. We reviewed the physicians’ own answers to five of the scenario-based questions and asked them to explain the rationale for each answer. We limited the number of reviews to prevent fatigue and varied the items we explored with each physician so that multiple physicians reviewed all the items. We also informed physicians of the correct answer, provided the relevant supporting evidence from the literature, and gathered their responses to and acceptance of this new information. We then elicited physicians’ views on the applicability and ethicality of using specific choice architecture principles that may influence healthcare decision makers. Finally, we prompted physicians to describe any prior or ongoing training in choice architecture or related communication principles. Physicians were compensated US$50 for participating in the interview.

Audio-recorded interviews were transcribed verbatim by a professional transcription service. Four investigators (JLH, KY, AS, and SS) generated a preliminary codebook based on the interview content. We independently coded the interview transcripts in duplicate, reviewed the coding for discrepancies and reached an agreement on the application of codes in regular coding meetings. Throughout the qualitative analysis, we updated and refined the codebook and interview guide as necessary. Three investigators (JLH, KY, and SS) then independently reviewed the content of the codes in order to identify emergent themes and subsequently met as an analytic team to reach a consensus on the results. Interviews continued until we achieved thematic saturation after 15 interviews, at which point no new themes emerged during further content analysis.

Finally, we performed synthesised member checking among physicians to validate our qualitative results and limit the potential for researcher bias. Member checking, also known as respondent validation, enables the studied population to review, contribute to, and further corroborate or refine the qualitative research findings.24 We provided the interviewed physicians an opportunity to review the major themes identified from the interview transcripts using an emailed Qualtrics survey. We asked them to indicate whether the findings resonated with their own experience and explain any perceived inaccuracies.

Results

Assessment results

Of 160 eligible resident physicians, 93 (response rate (RR)=58.1%) completed the questionnaire. The mean age of physicians was 30.2 years (SD=2.4), and a majority identified themselves as male (n=57, 61.3%), Caucasian (n=56, 60.2%) and Democrat (n=61, 65.6%). The most represented medical specialties included internal medicine (36.6%), anaesthesia (16.1%), surgery and surgical subspecialties (11.8%), and emergency medicine (10.8%). These demographics are detailed in table 2 and are representative of resident physicians at this tertiary academic health system.

Table 2

Characteristics of physicians (N=93)

Physicians correctly answered a mean of 4.85 of the 8 choice architecture competency items (SD=1.26; 95% CI 4.59 to 5.11), or 60.62% (SD=15.74; 95% CI 57.38 to 63.86; figure 1). Physicians demonstrated the highest competency in scenarios assessing habit formation, social norms and default effect. They demonstrated the lowest competency on scenarios assessing relative risk bias, anchoring effect and multiple alternatives bias. Linear regression identified no associations between physicians’ characteristics and choice architecture competency (all p>0.05; online supplemental appendix 7).

Figure 1

Proportion of physicians correctly predicting the influence of choice frames. *Total correct items out of 8.

Interview results

Of 28 invited resident physicians, 15 (RR=53.57%) completed the semistructured interview. This subset represented both high competency (score ≥6, n=8) and low competency (score <6, n=7) scorers. The median duration of the interview was 39 min (IQR=33–42 min, range=20–51 min). We invited 14 of these 15 interviewees to validate the results by member checking, as we could not locate a valid email address for the remaining interviewee. Of 14 invited physicians, 6 (RR=42.85%) responded, and all confirmed the validity of the themes that emerged from the interviews. The respondents identified no significant inaccuracies after reviewing the qualitative conclusions. Key themes are summarised in table 3.

Table 3

Themes from semistructured interviews of physicians

Choice architecture is highly relevant to healthcare

Physicians uniformly found the principles of choice architecture highly applicable to healthcare settings and the physician’s role in shared decision making. Physicians recognised the importance of their communication in shaping decision makers’ choices and health behaviours. For example, reflecting on the use of social norming, one physician noted:

[I use social norming] all the time… People ask me, ‘Are there people that don’t get an epidural for this procedure?’ And I would say, ‘Very few wouldn’t.’ Because that’s the truth… If I say that probably they’re more likely, they’ll be like, ‘Well, then, I better get one even though I don’t want you sticking a needle in my back…’

However, many physicians also felt that they did not always understand the likely influence of the choice presentation they used. One such physician remarked:

I’m in a field where we talk a lot about risk vs benefits. I don’t think I’ve ever had anyone formally discuss with me the importance of thinking about absolute vs relative risk and how that may bias a patient–or anchoring–which are all super relevant.

Physicians predict influences based on personal experiences

Physicians based their responses to scenario-based questions on prior professional experiences or their own anticipated responses. A minority of their responses were based on specific training or education. Reflecting on the influence of relative risk bias, one resident developed the following prediction:

… if you say 20% experience a complication vs 10%, perfect, I’m, like, okay. But if you say it’s twice as likely to cause a complication, then that sounds a lot more concerning to me as a patient

In this way, physicians may have been more successful at predicting the impact of certain decision-making principles of choice architecture, such as social norms and default effect, because the influence of these principles seemed to be ‘common sense’ or more ‘intuitive’. When their predictions based on personal reactions were incorrect, they found the choice architecture to be surprising, counterintuitive or ‘shocking’. Physicians struggled to understand the influence of certain choice architecture on patients and caregivers due to differences in perspective and education. For example, some physicians were unable to predict the patient’s response to a choice presentation because of prior professional training. Reflecting on the relative risk bias, one physician commented:

[Physicians] are well versed in [statistics] and we can go back and forth between the percentages and the ratios. If you know both you probably may not be thinking about how you’re presenting it to the patient. They don’t know both. You do. So, for you to say twice as much or 20% vs 10%, it doesn’t mean anything [to the physicians]. But it does to the patient.

Similarly, physicians scored poorly in predictions of anchoring bias. The rationale for their incorrect answers revealed that physicians were less susceptible to the specific anchoring scenario due to their medical knowledge and therefore were less likely to recognise its influence on patients:

When providing or estimating the risk of a genetic disease, if the [patient’s] oldest living relative is 90, compared to 50, there’s a huge difference and it may be totally unrelated to the genetic disease. So I don’t think that a number in and of itself, a random number, should have anything to do with what one assumes is a risk factor when estimating the likelihood of acquiring a genetic disease.

The apprenticeship model may lead to errors in understanding

Some physicians described limited training in choice architecture during medical school, while almost all described learning about communication with patients and caregivers during their residency training. These communication skills, including the use of choice architecture, were developed by observing more senior or attending physicians through an apprenticeship model. One physician noted: “I have learned these [decision-making principles], but not in such an explicit way. It’s more just through experiencing how my attending[s] talk.” Another, at the end of 4 years of residency training, reflected that her communication of choices was shaped by the behaviours of attending physicians she witnessed early in her residency:

In our first month of [residency]…[we] have 2 attendings for 2 weeks each. You watch them consent patients in the morning and then throughout the day. So, a lot of the ways I present things are based on those [attending physicians].

Since this method of experiential learning is neither explicit nor exhaustive, many physicians felt as though they had to ‘learn on the fly’ or by ‘trial and error’. One physician illustrated:

I have learned almost none of these in actual, like formal settings… all of these [decision-making principles] I have learned on the fly. [As an example,] the compromise effect, we kind of talked about… giving a couple more options… helps [the patients and their surrogates] understand what is the truly desirable choice [to them].

The minority of physicians who reported receiving didactic training on choice architecture did so largely outside of their medical education. Examples included undergraduate or graduate coursework in economics, psychology, computer science and statistics, as well as independent research into the subject due to personal interest.

Ethical boundaries exist when using choice presentations

The majority of physicians felt that choice architecture was acceptable when it promoted the patient’s best interest, as determined by the physician:

Depending on how you’re presenting [choice options], you’re going to influence patients’ decisions. But that’s kind of your job in the role of an expert consultant, is to influence or recommend. I think most people when they are influencing peoples’ decisions are doing it from the place of trying to do what they think is best for the patient. I think that is always ethical…

A few physicians felt that the use of choice architecture could be ethically problematic as its use may infringe on the decision maker’s autonomy and introduce elements of deception or manipulation. One physician remarked: “From an ethical standpoint, you should do a limited amount of influence other than presenting information and allowing patients to make an informed decision.” Others recognised that the degree to which the choice architecture influenced the decision maker may inform its ethicality. For example: “I don’t think you should use [decision-making principles] to push someone to one [option] wholeheartedly…because I think that takes away the option or decision that the patient gets to have.”

In this way, physicians felt that the use of choice architecture must balance the promotion of patient autonomy and the need to guide decision makers towards a choice aligned with a patient’s best interests. Ultimately, most physicians identified training in the use of choice architecture as crucial to promoting its ethical use. One physician concluded: “I think it’s good to know what the literature actually shows because…even if you think you’re presenting it in an unbiased way…you might be biasing them towards one option or another.”

Discussion

Clinicians frequently serve as choice architects, as the use of choice architecture is often unavoidable when presenting choices to decision makers.25 Our interviewed physicians generally agreed that the decision-making principles of choice architecture are highly applicable to the healthcare setting. The perceived relevance of choice architecture to physicians’ professional role as clinical advisors may facilitate efforts to improve their use of choice architecture. Furthermore, physicians felt that they could not use choice architecture ethically without sufficient knowledge of and competency in applying its principles. Our findings reveal that while physicians are able to predict the influence of certain decision-making principles, many lack the ability to predict the influence of most choice architecture principles on healthcare decision makers.

Our current research and healthcare environment appropriately emphasises the importance of shared decision making in order to promote goal-concordant care.26–29 Yet physicians are not sufficiently competent in choice architecture to fulfil their role as expert guides for these decisions. Participating physicians reported little to no explicit discussion of choice architecture in their clinical training. The use of an apprenticeship model to teach communication skills and principles inherently limits the improvement of these skills if the individuals who model the behaviours also lack adequate knowledge and competency in choice architecture. Our study did not test the competency of more senior clinicians, who would have had more experience directly observing the responses of patients and surrogates. However, the predictable influences of choice architecture have only been recently described.5 Moreover, healthcare leaders and educators have only recently recognised the relevance of choice architecture to healthcare and integrated these concepts into system improvements and medical training.30 31

Our findings also suggest that some physicians believe the ability to communicate choice options to decision makers is a ‘soft skill’. That is, this skill is one that does not require explicit, didactic instruction but is instead intuitive or developed through experiential learning and shared socialisation. Our results highlight the shortcomings of this approach. Physicians’ current training and clinical education may, in fact, make it more difficult for them to predict how different choice architecture impacts decisions made by laypeople. An individual’s susceptibility to certain heuristics and biases is highly dependent on his or her personal experiences and expertise.32 Physicians have more comprehensive knowledge about the medical choices being considered, but they may not have corresponding knowledge of how to communicate those choice options to laypeople. Consequently, they will be less likely to recognise the effect of choice architecture on patients without such training.33

Some physicians’ lack of sufficient competency in choice architecture may lead to ethical challenges in shared decision making. Clinicians who do not understand choice architecture cannot predict the direction or degree of its impact on an individual decision maker. Nevertheless, clinicians are responsible for the influence they have on decision makers in their role as choice architect. This influence may be unintended, but decision makers are still influenced towards or away from particular choices as a result. Only by understanding how decision-making principles influence decision makers can clinicians decide whether that influence, or the direction of that influence, is justifiable.34 Therefore, future research is needed to identify the types of choice architecture that clinicians use most frequently in clinical practice, explore patients’ and surrogates’ views on the appropriate boundaries of clinicians’ use of choice architecture in shared decision making, and develop effective educational interventions to improve clinicians’ competency in choice architecture. Further, this lays the foundation for future work exploring whether clinicians’ choice architecture competencies and similar communication skills underlie the well-described variation in the care patients receive, especially when patients and surrogates face preference-sensitive decisions.35 36

Limitations

First, this study used hypothetical scenarios in the questionnaire. Although we validated the scenarios among laypeople on MTurk, responses to hypothetical scenarios may differ from the medical decisions that decision makers would actually make in the described situations. Physicians’ abilities to predict decision makers’ behaviours may also differ when completing a questionnaire as compared with the clinical setting. However, our goal was to assess knowledge, for which this format remains appropriate. Self-reported statements may also not reflect actual behaviours, which may have affected the validity of physicians’ responses during the interviews. Second, we used a single scenario to assess competency in each choice architecture principle. Other scenarios assessing the same principle may have yielded different responses, although including multiple scenarios for each principle would have made the questionnaire more burdensome to participants. Third, there are numerous cognitive heuristics and biases not included in the questionnaire.12 We selected decision-making principles that we hypothesised would influence preference-sensitive medical decisions, for which there was published literature supporting their direction of influence and for which the direction of influence was empirically validated by our MTurk sample. Fourth, participants may have experienced fatigue while completing either the questionnaire or the interview. However, we minimised the duration of both to reduce participant burden and compensated all participants for their time. Fifth, our sample size was sufficient for describing competency but did not allow us to explore variation in the data based on key physician-level characteristics, which will be an important future step in intervention development and implementation. Sixth, our recruitment of resident physicians from a single, large academic health system may limit the generalisability of our findings to physician populations at other institutions, to clinicians with greater clinical experience and to non-physician clinicians. However, this limitation is somewhat mitigated because the included residents were educated in varied training environments within a single institution: the residency programmes at the three hospitals have little to no shared educational programming. Clinicians who trained longer ago are also likely to have had less exposure to choice architecture as a concept. Finally, the low response rate may reflect selection bias, as those with less interest in the topic may have elected not to participate in the questionnaire or the subsequent interviews. However, such selection would lead to overestimates of both physicians’ choice architecture competency and their recognition of the relevance of these principles to clinical practice, and would therefore only strengthen our finding that clinicians may benefit from enhanced awareness of and training in the use of choice architecture.

Conclusions

Clinicians assume the role of the choice architect when presenting options to patients and other healthcare decision makers, whether or not they realise it. Clinicians must present choices and information as they engage in shared decision making, a process that is intended to promote care aligned with patients’ values and preferences. However, our results suggest that many clinicians may have inadequate competency in choice architecture. Consequently, the uninformed use of choice architecture by clinicians may well influence patients and family members in ways that the clinicians did not anticipate or intend. In light of our findings, future research should examine the impact of clinicians’ choice architecture on patients’ decisions and outcomes, and should develop and test interventions to improve choice architecture competency among clinicians.



Footnotes

  • Twitter @JHartMD, @KuldeepNYadav

  • Contributors All authors made substantial contributions to the conception or design of the work, or the acquisition, analysis or interpretation of data. All authors were involved in drafting the work or revising and reviewing it critically for important intellectual content. All authors gave final approval of the version published. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

  • Funding This study was funded by the National Heart, Lung, and Blood Institute (K23HL132065) and Leonard Davis Institute - Center for Health Incentives and Behavioral Economics Penn Roybal Center, National Institute on Aging (P30AG034546).

  • Disclaimer The funding sources played no role in designing the study, interpreting the data, or writing and publishing the manuscript.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Ethics approval The Institutional Review Board of the University of Pennsylvania approved this study.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Protocols and analytic plans are available upon request. The full instruments were included in the supplemental materials. Data may be shared upon reasonable request in alignment with institutional policies.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
