Approach to making the availability heuristic less available
  1. Donald A Redelmeier1
  2. Kelvin Ng2
  1. Department of Evaluative Clinical Sciences, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
  2. Department of Medicine, University of Toronto, Toronto, Ontario, Canada
  Correspondence to Dr Donald A Redelmeier, Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada; dar{at}ices.on.ca

Introduction

Errors in judgement are often traceable to pitfalls of human reasoning. One pitfall is the availability heuristic, defined as the tendency to judge the likelihood of a condition by the ease with which examples spring to mind. This intuition is often a good approximation but can sometimes be mistaken because of fallible memories. People, for example, may mistakenly believe drowning causes fewer deaths than fires in the USA (actual deaths in 2017: drowning=3709 vs fires=2812)1 because they cannot easily recall many news stories about drowning. Calm water is boring to imagine, whereas bright flames are dramatic images vividly recalled and frequently popularised. In turn, people can underestimate the risks lurking in lakes or rivers and neglect basic safety strategies. This is one example of how the availability heuristic could contribute to a fatal mistake.

Diagnostic errors can also stem from the availability heuristic and contribute to serious mistakes in patient care. One pregnant patient diagnosed with Zika virus infection, for example, may provoke wide public attention, lead to excessive viral testing of pregnant women and result in underestimating more likely contributors to maternal morbidity including domestic violence, mental illness and traffic crashes.2 Of course, a formal analysis of diagnostic possibilities for every case would demand substantial effort and, itself, does not guarantee a correct diagnosis. In addition, the availability heuristic often leads to the right diagnosis by providing a quick and easy guess.3 This means the availability heuristic will have enduring appeal in medical care for the foreseeable future.

In this issue of the journal, Mamede et al show how to potentially diminish the availability heuristic in physicians’ diagnostic judgements for patients presenting with diarrhoea or jaundice.4 Medical residents (n=91) examined difficult cases (25 written scenarios in total) and provided diagnostic judgements (ultimately scored as ‘0’ for an incorrect diagnosis and ‘1’ for a correct diagnosis). The core educational intervention, randomly assigned to half of the participants, involved reflective learning with domain-specific knowledge. The main results showed the intervention improved diagnostic accuracy for vulnerable cases (mean accuracy 0.24 vs 0.40, p=0.004). Mamede et al conclude the intervention reduced diagnostic errors.
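To make the scoring concrete, the sketch below shows one way binary 0/1 diagnostic scores could be aggregated into mean accuracy per study arm and compared with a simple two-proportion test. The counts are hypothetical, chosen only to mirror an accuracy contrast of roughly 0.24 vs 0.40; this is an illustration of the arithmetic, not the statistical model used by Mamede et al.

```python
# Minimal sketch (not the authors' analysis): summarising binary diagnosis
# scores as mean accuracy and comparing two study arms.
import math

def two_proportion_z_test(correct_a, total_a, correct_b, total_b):
    """Two-sided z-test comparing two proportions of correct diagnoses."""
    p_a, p_b = correct_a / total_a, correct_b / total_b
    pooled = (correct_a + correct_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical aggregate counts: (correct diagnoses, scored cases) per arm
intervention = (40, 100)  # ~0.40 mean accuracy
control = (24, 100)       # ~0.24 mean accuracy
p_i, p_c, z, p = two_proportion_z_test(*intervention, *control)
print(f"intervention={p_i:.2f}, control={p_c:.2f}, z={z:.2f}, p={p:.3f}")
```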

A major strength of the study was in designing and documenting a strategy against the availability heuristic. In essence, this educational approach involved a clinical exercise to identify features that distinguished diseases with otherwise similar presentations. This included creating a table to organise features that supported the diagnosis, that weakened the diagnosis, or that should be present with the diagnosis. An additional clinical exercise involved reviewing a similar table already completed by expert internists to further define the differentiating features of a specific disease. This effortful educational task involved knowledge development yet seems more effective than abstract debiasing strategies.5
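As an illustration only, the sketch below represents the kind of reflection table described above as a simple data structure; the diagnosis and features shown are hypothetical placeholders, not content from the study materials.

```python
# Minimal sketch (hypothetical content): one way to represent a structured
# reflection table in which a learner sorts clinical features into those
# that support a diagnosis, argue against it, or should be present if true.
from dataclasses import dataclass, field

@dataclass
class ReflectionTable:
    diagnosis: str
    supports: list = field(default_factory=list)             # features favouring the diagnosis
    weakens: list = field(default_factory=list)              # features arguing against it
    expected_if_present: list = field(default_factory=list)  # features that should accompany it

# Hypothetical example for a jaundice scenario
table = ReflectionTable(
    diagnosis="Acute viral hepatitis",
    supports=["prodromal malaise", "markedly elevated transaminases"],
    weakens=["painless palpable gallbladder"],
    expected_if_present=["risk factor for viral exposure"],
)
for column in ("supports", "weakens", "expected_if_present"):
    print(column, "->", getattr(table, column))
```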

This is not the first trial to find some efficacy against pitfalls of human reasoning in diagnosing patients.6 In particular, a recent review of 14 studies found that structured guided reflection was a consistently successful strategy.7 The effectiveness of guided reflection can be further strengthened if supplemented by deliberative stepwise evaluation of alternative hypotheses and by prompt feedback with contrasting examples (as included in the strategy by Mamede and colleagues). Conversely, earlier reviews have not found encouraging results from other strategies including generic prompts, debiasing workshops or computer support.8–10 Together, this body of literature offers some optimism about how to educate clinicians to guard against pitfalls of human reasoning.

The study by Mamede et al provides another important observation relevant for future science. In particular, the study highlights several logistic challenges given that research participants had to attend more than one session in separate hospitals at different times. These are substantial operational challenges that can be particularly daunting because psychology trials rarely have lavish budgets.11 Indeed, this study seems to have been self-funded by institutional resources and must have required substantial initiative by the research team. Together, these nuances highlight how the modest sample size and completion rate (eligible=232, completed=91, ratio=39%) are a testament to the importance of local stakeholder partnerships.

The study has several other strengths that deserve recognition. The experimental design based on prospective randomisation is a welcome approach to test an educational intervention. The recruitment of experienced clinicians rather than internet volunteers underscores that psychological science extends to practising physicians. The written scenarios were well crafted, the clinical problems were legitimate and the participants seemed to take the task seriously. The addition of unrelated scenarios as ‘fillers’ was an efficient and sensible method for providing some degree of blinding, although at the expense of more respondent burden. The contribution also builds on a larger stream of research by this group examining how the availability heuristic contributes to diagnostic mistakes.12

An important limitation of this study is the uncertainty over whether the availability heuristic was solved (the bias is extinguished) or merely supplanted (the bias is rendered inconsequential). By their nature, pitfalls of reasoning mostly arise in tough rather than easy problems; for example, no amount of framing bias would reverse an obvious personal preference for watching baseball rather than ballet. In medicine, effective clinical education can hopefully transform a tougher diagnosis into an easier one. Together, this means formal education often lessens cognitive pitfalls by providing a clear path to the correct answer (assuming the training is relevant). As a consequence, the term ‘immunised’ is somewhat misleading and leaves open the debate of whether the availability bias was merely circumvented.

A common concern for randomised trials relates to whether the findings will replicate in everyday practice. For most studies, this equates to wondering whether those selected for the trial might be more cooperative, less problematic or otherwise different from those encountered in everyday practice. In addition, the brief duration of follow-up means the apparent benefit from education might not be sustained. This trial, however, raises further issues of external generalisability since we do not know whether the apparent improvement in diagnostic accuracy for patients with diarrhoea or jaundice would extend to patients with chest pain, dyspnoea or other problems. Moreover, the nature of the intervention is such that immunising against the availability bias requires education specific to each condition.

A deeper philosophical question persists on whether immunising clinicians against the availability heuristic is a worthwhile way to reduce diagnostic errors. On the one hand, effortful deliberative strategies such as analysing literature are not always a practical approach to correcting diagnostic fallibility.13 On the other hand, this immunisation approach would require substantial time and effort to apply to a range of common problems. As with vaccines, clinicians might also need ‘boosters’ from time to time.14 Perhaps the essential finding by Mamede et al is that the error rate stayed substantial even after the educational intervention. This means we still need to stay humble about diagnosing patients because the pitfalls in human reasoning have no one simple solution in medical care.

References

Footnotes

  • Contributors All authors contributed to the design, analysis and interpretation of the study. The lead author (DAR) had full access to all data and takes responsibility for the accuracy of the analysis. All authors were involved with drafting the manuscript and critical revisions.

  • Funding This project was supported by a Canada Research Chair in Medical Decision Sciences, and the Canadian Institutes of Health Research.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Commissioned; internally peer reviewed.
