Premature closure? Not so fast
Gurpreet Dhaliwal
Correspondence to Dr Gurpreet Dhaliwal, Department of Medicine, University of California San Francisco, Medical Service, San Francisco VA Medical Center, 4150 Clement Street, San Francisco CA 94121, USA; gurpreet.dhaliwal@ucsf.edu


Dual process theory (DPT) and the intertwined concepts of heuristics and biases, popularised by Kahneman's book Thinking, Fast and Slow, are widely discussed models for analysing decision-making processes inside and outside medicine.1 The basic premise of DPT is that the brain has a fast, intuitive, but occasionally error-prone system (system 1) and a slower, energy-intensive but more accurate analytical system (system 2). Inextricably tied to the DPT model is the idea that the errors made in system 1 are a result of shortcuts (heuristics) and predispositions (biases), and the hope that if we spent more time in system 2, cognitive errors could be mitigated.

Insights from this model have driven quality improvement and medical education efforts. Learning about how our brain succeeds and fails is interesting, humbling and motivating—but is it effective? My instinct has always been that it is, but as I have tried to answer key questions that my own DPT-based teaching inevitably brings up, I have become less certain.

Can I show accurate examples of system 1 or system 2 thinking?

One of the ways to bring the model to life is to provide examples, but it is difficult to find examples of pure system 1 or system 2 thinking in clinical medicine. Like others, I use the herpes zoster rash as a system 1 prototype, but then explain how a good clinician always asks a few questions and carefully inspects the rash before declaring their conclusion. The stock system 2 example is a mathematical analysis of pretest and post-test probability of pulmonary embolus, but I eventually disclose that emotional dimensions like regret also influence the decision to order a CT angiogram. Analysis of every case shows that some aspects of the clinical reasoning process (eg, hypothesis generation) are intuitive, while other phases (eg, hypothesis verification) are more analytical. One quickly finds oneself telling students that cognition always exists on a continuum between pure intuition and pure analysis, but then wonders: what did that accomplish?
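(For readers who have not seen that stock calculation, it is simply Bayes' theorem in odds form: post-test odds = pretest odds × likelihood ratio. With an assumed pretest probability of 15% and an illustrative likelihood ratio of 0.1 for a negative D-dimer, neither figure drawn from this article, the pretest odds are 0.15/0.85 ≈ 0.18, the post-test odds are 0.18 × 0.1 ≈ 0.018, and the post-test probability is 0.018/1.018 ≈ 2%.)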

Can I teach students to go fast or go slow?

The knowledge of the continuum would be particularly useful if I could guide students on whether, when and how to pick a spot on the intuitive-analytical continuum when they confront a case. But I cannot. The brain's initial approach to a problem unfolds over the span of milliseconds, and in that time frame there is no option to consciously recruit system 1 or system 2. Students, like doctors, do not choose to think fast or slow. Rather, it is the nature of the task (easy/familiar or hard/unfamiliar) that drives the problem-solving strategy.2 (There are multiple vignette studies of medical trainees showing that either fast3–5 or slow6–8 diagnosis can be accurate.) A teacher or curriculum can influence what is familiar to a student, but not how fast or slow the student thinks about it.

Can I teach students to debias themselves?

If I cannot calibrate how fast or slow they think, perhaps I can teach students a hard stop so that they can catch themselves making a cognitive mistake.9 This too turns out to be difficult because the brain does not have an early error detection system. In her book Being Wrong: Adventures in the Margin of Error, Schulz points out that when you are actively in the process of making a mistake, being wrong feels exactly the same as being right.10 It feels like the perfect strategy right up until the moment that it is not.11 When Sherbino et al taught trainees cognitive forcing strategies to mitigate diagnostic error, it did not work.12 Part of the challenge is that debiasing, just like medical diagnosis of a patient, involves the fallible process of making a cognitive diagnosis of oneself. Prospectively categorising our impending heuristic is difficult, prone to error and may lead to the application of an incorrect or harmful corrective strategy.13

Hindsight bias

Although the prospects of DPT changing the way the human mind works are limited, I have always appreciated that the model provides clinicians with a useful shared vocabulary for analysing our previous mistakes. A study in this issue points out how even this may not be true.14 In a web-based study of 37 physicians, Zwaan et al demonstrated that when clinicians reviewed case vignettes, they were more likely to ‘see’ cognitive errors when they learned that the working diagnosis was incorrect than when it was correct (and in either case, clinicians did not agree on which cognitive errors were present). It is human nature to judge the quality of the decision-making process by the result rather than by the logic; we cannot help it. That is the hindsight bias (irony noted).

In sports, the exact same decision-making process can conjure different adjectives depending on the outcome. If the coach or athlete undertakes a high-risk play and the team wins, he or she is hailed as confident, strategic, experienced or gutsy. If the play results in a loss, the same decision is called short-sighted, foolish, overconfident or reckless.

When a physician makes a challenging diagnosis with just a few pieces of information, she is called a brilliant diagnostician. If her diagnosis is wrong, it is called premature closure. If I missed a brain tumour in a patient with a headache last week, I may have a lower threshold for ordering a head CT scan in a similar patient today. If today's CT detects a brain tumour, it is called learning from experience. If the CT scan is normal, it is called the availability heuristic. If a patient has flank pain and haematuria and you do not revise your diagnosis of nephrolithiasis despite a negative abdominal CT, that is called anchoring bias—until the patient comes back the next day with his passed stone in hand.

This is where the focus on heuristics and biases comes up particularly short as a target for quality improvement. We only use the vocabulary of heuristics and biases in cases with bad outcomes, which makes us forget that they work most of the time. McLaughlin et al observed that we do not know how many heuristics and biases would be found on detailed inspection of the charts of patients who are correctly diagnosed.15 Zwaan et al's study shines a light on this question: an average of 1.75 cognitive biases was identified even when the working diagnosis proved to be correct. Many studies show that heuristics can lead to better decisions than analytical models.16 Heuristics are not a ‘bug’ in our neural software; they are an essential feature of the programme.

Knowledge is king

Knowing how the brain works and how it fails should lead to improved cognitive performance—but to date, it has not. There is no doubt that cognitive errors contribute to medical errors, but the intuitive appeal of the DPT model has led to a situation where ‘a significant portion of medical education literature has been dedicated to identifying the sources of cognitive errors, rather than identifying the best strategies for learning the prerequisite knowledge to avoid errors’.17

I think it is still important to know and teach about heuristics and biases.18 And I do so for one reason: learning about our flawed cognition keeps us humble. But that insight should motivate us to pursue the strategies that are most promising for improving performance.

Although early retrospective studies of diagnostic error emphasised faulty cognition,19 recent experimental studies illustrate that knowledge remains the key determinant of diagnostic accuracy.20 Teachers cannot shape the thought process, but they can shape the training environment and influence the way knowledge is constructed by learners. Strategic sequencing of problems (spaced practice), compare-and-contrast reading strategies, expanded clinical experience and feedback on patient outcomes21,22 are more likely to build a reliable doctor than in-depth study of DPT.

If you have never heard of myasthenia gravis, you cannot cognitively debias your way into that diagnosis. You can spend all day in system 2 and collect more and more information, but if you do not have a well-developed illness script that contains the atypical manifestations of heart failure, you will never recognise it. In the realm of expert performance, knowledge is king.23

Conclusion

It is hard for me to believe I cannot train my brain. Armed with the insights of the DPT model, I know I have caught myself in moments where I have been guilty of confirmation bias in my testing or affective bias against a certain patient group. But perhaps that is the best we can hope for from the model: a few moments of insight amid the thousands of decisions we make each day. Even Kahneman, Nobel laureate and a founder of the heuristics and biases field, says at the end of his 400-page book Thinking, Fast and Slow that after 30 years of study he is no better at avoiding these biases than he was when he started. He says he may now recognise a few situations where he is at risk of making cognitive errors, but, like all of us, he is still better at seeing them in other people than in himself.


Footnotes

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
