
Unwanted patients and unwanted diagnostic errors
Donald A Redelmeier,1,2,3,4,5,6 Edward E Etchells1,4,6

  1. Department of Medicine, University of Toronto, Toronto, Ontario, Canada
  2. Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada
  3. Institute for Clinical Evaluative Sciences, Toronto, Ontario, Canada
  4. Division of General Internal Medicine, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
  5. Center for Leading Injury Prevention Practice Education & Research, Toronto, Ontario, Canada
  6. Centre for Quality Improvement and Patient Safety, University of Toronto, Toronto, Ontario, Canada

Correspondence to Dr Donald A Redelmeier, Division of General Internal Medicine, Sunnybrook Health Sciences Centre, G-151, 2075 Bayview Ave, Toronto, ON, Canada M4N 3M5; dar@ices.on.ca


Introduction

Real people have real emotions that motivate their thinking. For example, the hope of having a child can lead women with infertility to undergo courses of intense hormonal treatment, and the fear of dying can lead men with prostate cancer to undergo surgical castration.1 Much of the attention to advance directives and discussions about goals of care is intended to document and legitimise a patient's emotions related to death and dying. Indeed, guidelines for physician-aided dying suggest that a patient's emotions are sometimes more important than life itself.2 In contrast, a physician's emotions are usually considered unwanted intrusions into medical decision-making that have no legitimate relevance.

Psychiatrists use the term ‘countertransference’ to denote a psychotherapist's emotions towards a patient. The basic concept is that a physician's own feelings may become entangled in the doctor–patient relationship and lead to missed diagnoses and ineffective care. Sigmund Freud first popularised the concept about a century ago, emphasising how a physician's unconscious thoughts might include latent hostility or erotic feelings towards a patient.3 Authorities over subsequent decades have confirmed that countertransference is an undesirable but unavoidable component of medical diagnosis and treatment. The importance of these potentially disruptive physician emotions, however, is hard to judge in the absence of objective data.

Schmidt et al and Mamede et al present two articles testing whether disruptive patient behaviours might provoke unhelpful physician emotions and thereby decrease a physician's diagnostic accuracy.4,5 The studies involve clinical scenarios eliciting diagnostic judgements. Each scenario appeared in either a ‘negative’ or a ‘neutral’ version that differed by only a few fragments of text. The negative version described the patient with unpleasant features such as “He is angry about the long waiting time and starts speaking harshly …”. The neutral version described the same patient with innocuous features such as “He comments on the long waiting time but says he is glad …”. The two versions were otherwise similar and randomly assigned to physicians.

The findings show a significant decrease in physician diagnostic accuracy when combined across scenarios and scaled so that ‘0.00’ denotes a faulty diagnosis and ‘1.00’ denotes an accurate diagnosis. Overall, the first experiment indicated a 0.10-point absolute decrease in diagnostic accuracy for negative patients compared with neutral patients (0.54 vs 0.64, p=0.017). The second experiment indicated a similar absolute decrease in diagnostic accuracy for negative patients compared with neutral patients (0.41 vs 0.51, p=0.009). In both experiments, mean diagnostic times averaged about 2 min. Presumably, the high rates of diagnostic error in both experiments reflect the complexity of the scenarios or other methodological confounders.

These results agree with many past analyses indicating that unpleasant people tend to have unfavourable outcomes. In one classic study, defendants (n=67) convicted of violent crimes were assessed on a numerical scale for physical attractiveness and subsequently followed for the ultimate judicial decision around incarceration.6 The main findings indicated a near doubling of the odds of being sent to jail for defendants with low attractiveness compared with defendants with high attractiveness (77% vs 46%, p=0.014), equal to an absolute number-needed-to-treat of about 3. It is logical, therefore, that defendants tend to dress properly and behave politely when undergoing a judicial evaluation.
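For clarity, the number-needed-to-treat of about 3 can be reconstructed from the reported proportions, assuming the conventional definition as the reciprocal of the absolute risk difference:

\[
\text{NNT} = \frac{1}{0.77 - 0.46} = \frac{1}{0.31} \approx 3.2
\]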

The data of Schmidt et al have some limitations due to biases that are inherent to scenario experiments. Surrogate endpoint bias means that mistakes on survey responses might not translate into real-world adverse events. Confounding bias means that a patient's difficult personality may be entwined with the likelihood of diseases that inform a differential diagnosis (eg, a patient with an unexplained tremor may be more likely to have alcoholism than thyrotoxicosis if also behaving belligerently). Selection bias means that a few contrived scenarios may not reflect other situations where unpleasant behaviours could enhance diagnostic accuracy (eg, a frontal lobe tumour manifested by an irritable personality). These and other limitations need to be considered when interpreting the current results or planning future research.

Regardless of the research limitations, physicians need safeguards against the negative emotions that may undermine diagnostic accuracy. One strategy is simple self-reflection by the physician.7 For example, patients with mood disorders sometimes elicit urges in the physician to depart the clinical encounter, and recognising this feeling can be a signal to consider a diagnosis of depression. Similarly, patients with negative behaviours sometimes elicit urges in the physician to interrupt the dialogue, and recognising this feeling can be a signal to consider a missed diagnosis. Of course, such prompts are imperfect and the effectiveness of self-reflection is hard to prove. The strongest argument for self-reflection is that patients with negative behaviours are not immune to serious diseases.

A different corrective strategy is to harness metacognitive debiasing skills. One approach, for example, is for physicians to reframe the situation as a counterfactual by imagining the patient as easy instead of difficult. Doing so demands mental discipline, of course, but might potentially enhance deliberation and decrease the time wasted by distracting emotions. In a culinary experiment, for example, switching a recipe from a difficult-to-read font (Mistral, 12-point) to an easy-to-read font (Arial, 12-point) significantly increased participants' willingness to cook a new Japanese dish.8 Unfortunately, the benefits of metacognitive training are uncertain because changing the attractiveness of a patient's personality is not as easy as switching a software font.

An individual physician might not always be able to separate emotions from the diagnostic process; however, a self-aware physician might be able to invoke teamwork and consultation to minimise the potential for diagnostic error. For example, the emotionally aware physician might approach a trusted colleague by asking “I have a patient with unexplained symptoms and I want to make sure I am not missing anything serious. The patient put me in a bad mood so my thinking might be off … Can you help me?” The main challenges of such a team approach are the candour required of the first physician, the understanding required of the second physician and the dispassion needed in both to avoid groupthink.

Another potential strategy is to consider structured diagnostic checklists or computer-assisted diagnosis when evaluating a difficult patient. Structured checklists can guide clinicians to identify relevant information in an otherwise difficult presentation. Computerised support can generate extensive differential diagnoses that might otherwise be neglected due to fallible human emotions. Both help to restore order when a physician's thinking might be disrupted by negative emotions. Indeed, many patients seem to look quite different when considered from the perspective of an electronic medical record than when seen in real life. Of course, these strategies are in their infancy and the current technology remains largely unproven.9

A traditional safeguard against diagnostic error is more diligent follow-up, which can help physicians confirm correct diagnoses as well as intercept faulty diagnoses.10 Yet failures of follow-up may be frequent in difficult patients since neither the doctor nor the patient feels enthusiastic about meeting again. In one randomised trial, for example, adults who described themselves as unemployed rather than wealthy were 50% less likely to receive an appointment when calling a physician's office.11 This means that disadvantaged people can encounter more of the gaps that both precipitate diagnostic errors and preclude subsequent error correction.12 An emotionally aware professional, therefore, might take the simple strategy of asking difficult patients to return.

Patients themselves could also take steps to minimise the possible impact of negative emotions on physician diagnostic accuracy. A medical encounter often provokes anxiety, and an emotionally aware patient could channel these emotions towards the positive by introducing themselves with a nicety such as “Thank you for seeing me. I am frightened by what I am experiencing and that is why I am here looking for something that might help”. Real people, however, cannot always control their temper when suffering or in pain. As the authors of this editorial, therefore, we believe these logically coherent patient strategies are unlikely to be popular or effective, despite the implication from Schmidt et al and Mamede et al that good etiquette can foster better diagnostic accuracy.

Schmidt et al and Mamede et al suggest that negative emotions may be provoked by patients, can lead to decreased physician diagnostic accuracy and might merit attention for improved medical care. They do not test corrective procedures, thereby highlighting opportunities for future research. The data also caution that potential corrective efforts may not be fully successful (given that emotion is what grounds real human thinking) and may not always be cost-effective (given the modest effect sizes reported). For the present, we suggest ongoing consideration of strategies that might lessen the detrimental impact of negative emotions yet still preserve the positive emotions that inspire physicians to diagnose most patients accurately.

References


Footnotes

  • Correction notice This article has been corrected since it was published Online First. The reference list has been updated.

  • Contributors DAR wrote the first draft. Both authors contributed to manuscript revisions and the final decision for submission.

  • Funding This article was supported by a Canada Research Chair in Medical Decision Sciences and the Canadian Institutes of Health Research.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
