Cognitive interventions to reduce diagnostic error: a narrative review
  1. Mark L Graber (1,2,3)
  2. Stephanie Kissam (3)
  3. Velma L Payne (4,5)
  4. Ashley N D Meyer (6,7)
  5. Asta Sorensen (3)
  6. Nancy Lenfestey (3)
  7. Elizabeth Tant (3)
  8. Kerm Henriksen (8)
  9. Kenneth LaBresh (3)
  10. Hardeep Singh (6,7)
  1. VA Medical Center, Northport, New York, USA
  2. Department of Medicine, SUNY Stony Brook, New York, USA
  3. RTI International, Research Triangle Park, North Carolina, USA
  4. School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA
  5. National Center for Cognitive Informatics and Decision Making in Healthcare, University of Texas Health Science Center at Houston, Houston, Texas, USA
  6. Houston VA HSR&D Center of Excellence, and the Center of Inquiry to Improve Outpatient Safety Through Effective Electronic Communication, Michael E. DeBakey Veterans Affairs Medical Center, Houston, Texas, USA
  7. Section of Health Services Research, Department of Medicine, Baylor College of Medicine, Houston, Texas, USA
  8. Center for Quality Improvement and Patient Safety, Agency for Healthcare Research and Quality, Rockville, Maryland, USA
  1. Correspondence to Dr Mark L Graber, RTI International, c/o 1 Breezy Hollow, St James, NY 11780, USA; mgraber@rti.org

Abstract

Background Errors in clinical reasoning occur in most cases in which the diagnosis is missed, delayed or wrong. The goal of this review was to identify interventions that might reduce the likelihood of these cognitive errors.

Design We searched PubMed and other medical and non-medical databases and identified additional literature through references from the initial data set and suggestions from subject matter experts. Articles were included if they either suggested a possible intervention or formally evaluated an intervention, and were excluded if they focused solely on improving diagnostic tests or on provider satisfaction.

Results We identified 141 articles for full review, 42 reporting tested interventions to reduce the likelihood of cognitive errors, 100 containing suggestions, and one article with both suggested and tested interventions. Articles were classified into three categories: (1) Interventions to improve knowledge and experience, such as simulation-based training, improved feedback and education focused on a single disease; (2) Interventions to improve clinical reasoning and decision-making skills, such as reflective practice and active metacognitive review; and (3) Interventions that provide cognitive ‘help’ that included use of electronic records and integrated decision support, informaticians and facilitating access to information, second opinions and specialists.

Conclusions We identified a wide range of possible approaches to reduce cognitive errors in diagnosis. Not all the suggestions have been tested, and of those that have, the evaluations typically involved trainees in artificial settings, making it difficult to extrapolate the results to actual practice. Future progress in this area will require methodological refinements in outcome evaluation and rigorously evaluating interventions already suggested, many of which are well conceptualised and widely endorsed.

  • Patient safety
  • human error
  • medical error
  • measurement/epidemiology
  • decision support, computerised
  • decision-making
  • information technology
  • trigger tools
  • primary care
  • diagnostic errors
  • health services research
  • healthcare quality improvement
  • evidence-based medicine
  • quality improvement
  • audit and feedback
  • cognitive biases


Introduction

Although the rate of diagnostic error in practice is unknown, experts estimate it to be in the range of 10%–15%.1 Diagnostic errors are of great concern in all specialties, and those characterised by high levels of stress, workload and distractions are particularly vulnerable. Errors are more likely when the level of uncertainty is high, when clinicians are unfamiliar with the patient, and when a common disease presents atypically or non-specifically or is accompanied by ‘distracting’ comorbid conditions.2

Diagnostic errors reflect the complex interplay of system-related and cognitive factors, typically with multiple root causes identifiable in a single case.3–6 Cognitive errors can be found in the majority of cases.4,7 Given the dominant role that cognitive shortcomings play in contributing to diagnostic error, it is appropriate to begin considering what could be done to help minimise the likelihood of these errors. We therefore conducted an analytic review of the literature to identify interventions to reduce the likelihood of cognitive errors or error-related harm in healthcare. Interventions relating to system-related factors were discussed in a companion publication.8

Methods

Our search strategy has been previously described.8 Briefly, we sought articles, books and conference presentations relating to the prevention, reduction or mitigation of diagnostic errors in PubMed and several other medical and non-medical databases. We pursued references from these sources and asked authorities in the field of applied cognition and decision-making to recommend additional readings. Articles and books were included in this analysis if they contained results from an intervention trial or suggested an intervention to reduce cognitive-related diagnostic error. Publications that focused on development or refinement of specific diagnostic tests or technologies, or solely on the aetiology or epidemiology of error, or dealt primarily with provider satisfaction or preferences were excluded.

A full-text review using an approach described by Gordon and Findley9 was performed on the 42 empirical studies that tested an intervention. Nineteen quality-based criteria were independently extracted from each article using a data extraction form (online appendix A). Items answered with ‘yes’ or ‘no’ included: literature review described, clear objectives reported, study design reported, appropriate design to address objectives, control group used, subjects randomised, blinding used, intervention clearly described, resources described, outcomes match objectives, statistical tests used, statistical tests appropriate, data collection replicable, study replication possible and limitations discussed. Additional items assessed were the study design, subject characteristics and number of subjects. Based on these items, we assigned an ‘Outcomes Rating’ and a ‘Strength of Conclusions’ rating to each article (detailed instruments in online appendix B). The Outcomes Rating was based on Kirkpatrick's hierarchy,9,10 which we slightly modified for use in assessing diagnostic errors. This hierarchy indicates the level of impact of each intervention on diagnostic errors (eg, Level 2b refers to an intervention in which an acquisition of concepts might impact diagnostic error, whereas Level 4b refers to an intervention that directly reduces diagnostic error). The Strength of Conclusions of each study was rated on a numerical scale (1–5) in accordance with Best Evidence in Medical Education guidelines.9,11 This rating is not an assessment of the overall methodological quality, but a measure of how well the conclusions are supported by the data presented.

Two reviewers with expertise in cognitive psychology (ANDM and VLP) assessed each of the intervention studies independently. We assessed agreement between the reviewers for the Outcomes Rating and the Strength of Conclusions with Cohen's κ statistic. Differences were resolved by discussion between the two reviewers; in cases of persistent disagreement, another investigator (SK) reviewed and rated the article, and consensus among the three reviewers determined the final ratings.
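For readers unfamiliar with the statistic, Cohen's κ compares the observed agreement between two raters with the agreement expected by chance. The following is a minimal sketch; the helper function and the toy ratings are invented purely for illustration and are not the review's data:

  from collections import Counter

  def cohens_kappa(rater_a, rater_b):
      # Observed agreement: proportion of items rated identically.
      n = len(rater_a)
      observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
      # Chance agreement: summed products of each rater's marginal frequencies.
      freq_a, freq_b = Counter(rater_a), Counter(rater_b)
      expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                     for c in set(freq_a) | set(freq_b))
      return (observed - expected) / (1 - expected)

  # Hypothetical Outcomes Ratings assigned to five studies by two raters:
  ratings_a = ['2b', '4b', '2b', '3', '1']
  ratings_b = ['2b', '4b', '3', '3', '1']
  print(round(cohens_kappa(ratings_a, ratings_b), 2))  # 0.74 on this toy data

By convention, values in the range observed in this review (κ=0.70) are interpreted as substantial agreement.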

Based on a prior classification scheme,1 all articles were assigned to one of three natural categories: (1) Interventions that increase medical knowledge and experience; (2) Interventions that improve clinical reasoning; and (3) Interventions that involve getting help. Articles were further subdivided into more specific types of interventions (such as ‘focused training on specific content areas’, ‘develop simulation exercises to expose clinicians to a greater number and variety of case presentations’, etc) to facilitate the synthesis of the findings (tables 1–4).

Table 1

Interventions to increase medical knowledge and experience

Table 2

Interventions to improve intuitive (system 1) and deliberate (system 2) processes in decision-making

Table 3

Interventions: help from other people

Table 4

Interventions: help from decision support

Results

We identified 141 sources (articles, books and conference papers) for full review. Of these, 42 sources (tables 1–4) reported empirical studies of an intervention to reduce cognitive-based diagnostic error (and sometimes also additional suggestions for interventions), 100 sources contained only suggestions (table 5) and one had both. Some sources reported more than one suggestion.

Table 5

Intervention suggestions

During the full-text review of the empirical studies assessing cognitive interventions, agreement between reviewers on the Outcomes Rating was substantial (κ=0.70). Similar agreement was obtained for the Strength of Conclusions (κ=0.70). Disagreements on three articles were resolved by discussion with a third reviewer. We assigned the intervention studies to one of three mutually exclusive categories: (1) Interventions to increase clinicians' knowledge and experience, (2) Interventions to improve clinical reasoning and decision-making skills or (3) ‘Get help’, interventions that assist clinicians with tools or with access to other clinicians or experts. For each category, we first use the suggested interventions and background literature to provide context, and then discuss the tested interventions. The Outcomes Ratings and Strength of Conclusions ratings for each intervention article are included in tables 1–4.

1) Increase knowledge and experience

Diagnostic error could potentially be reduced by increasing physicians' structured knowledge and experience, the essential basis of expertise.143 By definition, experts tend to make the fewest errors, are the best calibrated and excel at efficient diagnosis.50,144,145 Medical educators similarly regard increasing experience as the key to developing expertise.143,146,147 The interventions in this domain are summarised in table 1 and are organised into the following three categories.

Training focused on specific content areas

An effect of training on diagnostic reliability is illustrated in radiology, where certain certification programmes are based on demonstrating competency. For instance, radiologists in the UK must review 5000 mammograms a year for certification, as opposed to 480 in the USA, which may in part account for the large difference in diagnostic accuracy noted between the two countries.54 In certain programmes, radiologists also receive additional training in cancer detection, in which they attend disease-related meetings, receive feedback on cancer detection rates and attend a 2-week course led by specialists at high-volume mammography screening sites.55 Similar measures, including regular peer review and participation in the American College of Radiology's RADPEER™ system, have been proposed for the USA.148

Interventions to increase the knowledge base of practising clinicians through continuing medical education activities have generally not led to substantial improvement in measured performance.56,57

Interventions

We identified three formal studies of training interventions related to diagnostic error.12–14 One notable study was a highly content-specific intervention to improve recognition of subarachnoid haemorrhage. This low-cost training programme on sudden onset headache for community-based physicians reduced the baseline diagnostic error rate (12%) by 77% and improved interactions between neurosurgeons and local physicians.12

Simulation

Realistic simulation, through both scenarios and simulated patients, offers the potential to improve skills in clinical reasoning63 and the opportunity to expose trainees or physicians to a greater number and variety of case presentations. Simulation is a well-established approach to improving manual, procedural skills, but its ability to improve cognitive skills or diagnosis-related decision-making has not yet been evaluated extensively. It also remains to be demonstrated that simulation can substitute for experience in actual practice.

Interventions

We identified only two interventions in this domain, both involving trainees. Carlson et al16 demonstrated improved diagnostic accuracy from the combined use of simulation with a diagnosis support tool, and Bond et al15 used simulation successfully to introduce cognitive forcing strategies to emergency medicine residents.

Feedback as a way to improve expertise, calibration and error awareness

Deliberate practice, with immediate and focused feedback, is viewed as an essential prerequisite to developing expertise in any domain.144,145 Moreover, lack of feedback is a dominant factor sustaining overconfidence, itself thought to be a major contributor to diagnostic error.1 A systematic review of feedback across all medical areas (not solely diagnosis) concluded that feedback improves performance in selected settings, especially if the feedback is intensive.65 Feedback is most useful if it incorporates instruction and information on why a given answer was correct or not.66,67 For example, psychology trainees improved their diagnoses if feedback provided details on why they were right or wrong.18,149 Using feedback to improve diagnostic performance has been most convincingly demonstrated in radiology, through programmes such as the ‘PERFORMS’ system in the UK54 and the RADPEER programme in the USA.

Autopsies once gave clinicians immediate and dramatic feedback on their diagnostic performance, but autopsy rates are declining.150 Local ‘Morbidity and Mortality’ conferences73 and creative new formats such as the ‘Web M&M’ series sponsored by the Agency for Healthcare Research and Quality (http://www.webmm.ahrq.gov/) provide alternative venues for feedback.151 In this spirit, Eva74 has advocated incorporating diagnostic error review into medical school and postgraduate training. Alpert and Hillman discuss other types of data that should be part of such feedback, such as the results of professional audits, peer reviews and risk management programmes.68

Interventions

Our search yielded two studies on feedback to improve diagnostic performance.17,18 Both studies showed benefits of feedback on later diagnostic accuracy. The positive impact noted by Wood and Tracey18 was possibly explained by the provision of detailed feedback to trainees on the reasons their initial diagnoses were correct or not.

Category 1 summary

The empirical studies identified were positive, but generally used trainees and disease-specific content, limiting the generalisability of the results to actual practice.

2) Improve clinical reasoning

According to the currently popular paradigm, diagnoses are made by some interacting combination of intuitive, automatic processing (system 1) and deliberate, rational consideration (system 2).152 Interventions to reduce diagnostic error have been suggested in each of these areas, and many authors have advocated for the benefits of general training in clinical reasoning.1,23,47,49,53,75–78 The interventions in this domain are presented in table 2.

Improve intuitive processing: debiasing

Many, and perhaps most, medical diagnoses are derived intuitively, reflecting the fact that most conditions are common and present in a typical, easily recognised fashion. Coderre et al153 found that intuitive diagnoses are more likely to be correct than diagnoses derived by hypothetico-deductive reasoning, a finding also consistent with the substantial literature regarding expertise.

Experts in the field of naturalistic decision-making emphasise that intuitive judgements cannot be taught because they emerge subconsciously from the amassed experience of the decision-maker and his or her ability to access this knowledge instantaneously and effectively.48,50 However, others have argued that intuition can be encouraged, strengthened and improved.51,101,151 Brawn highlighted several strategies to encourage the use of intuition, such as showcasing examples of how intuition was used in discovery and insight situations.98 Noddings and Shore100 suggest that intuition can be developed by first acknowledging intuition and its role in decision-making, demonstrating its capacity and successes, and sharing how intuition is used, especially by experienced role models. Hogarth151 recommends a series of novel educational interventions to teach and improve intuition, including creating motivation to learn through exposure to one's own errors, and constantly seeking to improve one's learning skills by reviewing and revising skills in observation, sense-making and hypothesis testing.

Croskerry and others have argued that clinicians would make fewer errors if they learned the potential shortcomings (biases) of intuitive decision-making so as to understand and avoid them.76,77,91 Interventions to avoid both affective bias (engendered by our inherent discomfort with certain types of patients or interactions) and cognitive bias (due to the known shortcomings and pitfalls of subconscious thought) have been suggested.

Similar debiasing interventions were suggested by Fischhoff85 and included: (1) warning about the possibility of bias; (2) describing how the bias distorts good decisions; (3) letting the individual make a bias-related judgement error and giving them feedback; and (4) repeating these cycles with extended coaching. Larrick87 reported an example of successful debiasing by keeping it focused on a particular context and a particular bias.

Experimental evidence suggests that hindsight bias can be reduced by considering alternatives.154 In one such study, subjects were asked to choose between two answers to a difficult question;93 some were asked to give the reasons for their choice, while others were asked to give reasons both for and against it. Considering both alternatives improved accuracy and reduced the tendency for subjects to be overconfident in their answers.92 Similarly, physicians evaluating a difficult test case were more likely to trust a diagnosis when asked to consider alternatives.154

Although debiasing is potentially attractive, several authors have expressed scepticism about whether this approach will work, given the intrinsic difficulty of changing the subconscious processing individuals use in decision-making.86,155,156

Interventions

Our search yielded two studies. Sherbino and colleagues19 tested an effort to improve trainees' clinical reasoning by teaching them cognitive forcing strategies to counteract biases. The study lacked baseline data (no measure prior to the intervention) and a control group, and the results were generally negative. In addition, the reported retention of the cognitive forcing strategies that were the subject of the intervention was short-lived. Eva et al encouraged the use of combined strategies (pattern recognition plus deliberate consideration) in teaching students to read electrocardiograms (ECGs), and found that this improved their diagnostic performance, in part by avoiding biases.20

Improving metacognition and reflection

Improving metacognition, the ability to reflect on one's own thought processes, is an appealing approach to reducing cognitive error.77,78,103 Metacognition could potentially alert clinicians to possible flaws in their reasoning and help detect errors. A related and widely endorsed recommendation is to practise reflectively,82,94,96,97,102 recently referred to as taking a diagnostic ‘time out’.83 Reflective practice promotes metacognition and incorporates four distinct elements: seeking out alternative explanations, exploring the consequences of alternative diagnoses, being open to tests that would differentiate the various possibilities, and accepting uncertainty. This process, essentially getting a second opinion from one's own conscious mind, has the potential to avoid many of the inherent pitfalls of heuristic thought.82

Several tools have been suggested to promote metacognition and reflective practice, including Trowbridge's ‘12 Tips’ and Leonidas' ‘Ten Commandments’.83,157 A diagnosis checklist, by promoting conscious review and reflection, has also been advocated as a way to avoid pitfalls in clinical reasoning.106,157

Interventions

Two studies were identified. Mamede and colleagues found that conscious reflection decreased the tendency towards availability bias,21 and Coderre et al demonstrated that reflection on an initial diagnosis was helpful if the initial diagnosis was wrong, and did not lead to new errors if the initial diagnosis was correct.22 A limitation of both studies is that the additional time spent on problem solving, rather than conscious reflection per se, may be what drives the results. Also, both studies involved trainees in a laboratory environment, so the positive results would need to be confirmed in practice settings. It therefore remains inconclusive whether these techniques reduce diagnostic errors.

Consider alternatives

A central element of reflective practice is reviewing alternative diagnoses, a strategy widely endorsed as a valid route to improved decision-making107 and one we consider separately in this section. In this approach, clinicians invoke what has been called the universal antidote, ‘Could this be something else?’, and use appropriate tests to exclude the alternatives, rather than ordering tests that simply confirm the original suspicion.107 Others79 have suggested that clinicians ‘jot down, in advance, outcomes that would support one's initial conclusions and also those that would disconfirm them’ or consider alternatives.111 A related strategy is to assume the perspective of an outside observer,86 prompting evaluation of the decision-making strategy that was used and whether or not it was flawed. Military planners have used ‘prospective hindsight’ to teach this principle: one imagines a future in which the working diagnosis has proved incorrect and asks what was missed and what else should have been considered.97,112

Interventions

Our search yielded one study that tested an intervention in this category. Wolpaw et al23 attempted to improve clinical reasoning and decision-making skills through a six-step training programme in which medical students expressed their diagnostic reasoning process. The impact of this technique on diagnostic error is inconclusive: the study assessed only the frequency and thoroughness of the students' reasoning when presenting a patient case, not its accuracy, so it is unclear whether the intervention would improve diagnostic accuracy.

Improve rational processing

Rational, deliberate review and consideration combine the use of evidence-based knowledge158,159 with two normative approaches: expected value decision-making160 to choose among a group of possible diagnoses, and Bayesian analysis to incorporate test results in considering a single diagnosis. Kassirer et al47 describe the process of clinical reasoning as generating initial hypotheses which are then investigated by diagnostic tests and Bayesian analysis until an appropriate threshold (treat/don't treat) is reached. Kassirer suggests that the essential skills of clinical reasoning can and should be taught to medical students from their first days,126 and reviewers have concluded that conscious review can be taught effectively.121,122 Trainees taught principles of evidence-based medicine are more likely to use Bayesian techniques to interpret clinical findings.158 In efforts to reduce surgical cognitive errors, Brannick et al108 oriented surgical trainees to Reason's major error types using an educational video and role-playing emphasising errors. Although actual surgical error rates were the same as in untrained controls after a month, attention to detail improved.108
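The Bayesian, threshold-based step described above can be made concrete with a short sketch. All of the numbers below (pre-test probability, test characteristics and the two action thresholds) are hypothetical illustrations, not values from the cited studies:

  def post_test_probability(pretest_p, sensitivity, specificity, test_positive=True):
      # Convert probability to odds, apply the likelihood ratio, convert back.
      lr = sensitivity / (1 - specificity) if test_positive else (1 - sensitivity) / specificity
      pre_odds = pretest_p / (1 - pretest_p)
      post_odds = pre_odds * lr
      return post_odds / (1 + post_odds)

  p = post_test_probability(pretest_p=0.25, sensitivity=0.90, specificity=0.85)
  # Hypothetical thresholds for the treat/don't treat decision in the text:
  if p > 0.60:
      action = 'treat'
  elif p < 0.05:
      action = "don't treat"
  else:
      action = 'test further'
  print(round(p, 2), action)  # 0.67 treat

In this framing, testing continues only while the probability of the diagnosis sits between the two thresholds; each result moves the probability up or down until one threshold is crossed.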

Category 2 summary

We noted a major discrepancy between the breadth of, and enthusiasm for, these interventions in the background literature and the paucity of actual intervention trials. For all three categories, there is very limited evidence addressing diagnostic accuracy or errors. The studies identified involved trainees in laboratory-like settings, limiting the ability to generalise the findings to real practice.

3) Get help: use other people and decision support tools

Given the constraints of human cognition,78 physicians may be able to augment their innate cognitive abilities by obtaining advice and help from others. All of the tested interventions in this category are detailed together in online appendix C and are organised into the following categories.

Second opinions

Interventions

Several studies have demonstrated that second reviews of surgical pathology or cytology specimens find a small but important group of errors,24–28 and a growing number of healthcare systems now require second readings in case types known to have substantial rates of inter-observer variability. Most of these studies do not, however, include data on patient outcomes (table 3).

Second readings in radiology also improve test sensitivity. Duijm et al31 found that multiple independent readers (radiologists or technicians) increased cancer detection rates with only a slight decrease in specificity, and Kwek et al32 found that second reading increased cancer detection by 5%.

The impact of second readings has been mixed in other settings. Second reading of Emergency Room (ER) imaging studies was helpful in one study,30 but in another, besides identifying previously missed abnormalities, the second reading introduced new misinterpretations leading to inappropriate changes in management.29 Canon et al33 measured the impact of independent double reading of barium enemas and found no effect on the sensitivity of polyp detection and an increased rate of false positives.

Thus, the overall impact of ‘second opinions’ on diagnostic errors appears to be mixed. Sensitivity improves in most but not all studies, but second readings tend to introduce new errors that detract from the specificity of the diagnostic test. The results could potentially be both reliable and generalisable because of the relatively large number of cases reviewed in these studies and the use of expert reviewers. Cost–benefit analyses will be needed to determine whether the costs of second readings and the seemingly inevitable increment in false positives are offset by the increased rate of case finding.
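The underlying trade-off has a simple arithmetic intuition. The sketch below assumes, purely for illustration, two statistically independent readers and a ‘recall if either reader flags the study’ rule; real readers are correlated, so actual gains are smaller:

  # Hypothetical single-reader performance.
  sens, spec = 0.85, 0.95
  # A case is missed only if both readers miss it ...
  sens_double = 1 - (1 - sens) ** 2
  # ... but a false positive occurs if either reader errs.
  spec_double = spec ** 2
  print(round(sens_double, 4), round(spec_double, 4))  # 0.9775 0.9025

Sensitivity rises while specificity falls, matching the pattern seen across the second-reading studies above.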

Groups and librarians

Groups can make better decisions than their individual members if the members are allowed to function independently.86,161 Diagnosing challenging cases within teams or with peers would take advantage of this strategy. A recent novel approach leverages librarians, who are experienced and skilled in identifying information, evidence and knowledge relevant to diagnostic alternatives or testing strategies.129–131

Interventions

Christensen et al34 studied team-based decisions in a well-designed, controlled study, but the results were negative: performance did not improve with the use of teams.

A randomised trial of embedded clinical informaticians at one university demonstrated a positive impact on the clinical care provided,35 although self-reported perceptions were used in place of actual outcomes.

Decision support

Most studies of decision support tools have evaluated their impact on process measures, user satisfaction and utility in a limited sense,141 and the results are not consistently positive. A systematic review of decision support systems in 1998 identified only a single study focusing on diagnosis,133 and in this study, using a decision support tool in an emergency room on patients with joint or bone injuries actually led to more missed fractures.134

Using linear prediction models (actuarial decision-making, algorithms) has been shown to yield better ‘decisions’ than most decision-makers, including experts, in a wide range of settings.111 Wedding and colleagues79,132,162 report that actuarial diagnosis was more accurate than clinical judgement in patients with neuropsychiatric conditions. However, clinicians tend to disregard advice from these tools or not use them even when they are readily available.163,164 The importance of embedding decision support in the physician's workflow has been repeatedly emphasised, for example, by incorporating decision support logic into computer-based order entry systems. A systematic review of this approach identified 11 controlled trials, seven of which reported improved professional practice141 in ordering diagnostic tests.
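To make ‘actuarial decision-making’ concrete, the sketch below applies a fixed, weighted checklist of findings against a cut-off. The findings, weights and cut-off are invented for illustration and do not correspond to any validated clinical rule:

  # An actuarial rule applies the same weighted sum to every case.
  WEIGHTS = {'age_over_65': 1.0, 'chest_pain': 2.0,
             'st_elevation': 3.5, 'troponin_raised': 3.0}
  CUTOFF = 4.0

  def actuarial_score(findings):
      return sum(w for name, w in WEIGHTS.items() if findings.get(name))

  patient = {'age_over_65': True, 'chest_pain': True, 'troponin_raised': True}
  score = actuarial_score(patient)  # 6.0
  print('flag as high risk' if score >= CUTOFF else 'low risk')

The appeal of such rules is their consistency: the same inputs always yield the same output, which is one reason they tend to outperform unaided individual judgement across repeated cases.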

Hamm and Zubialde, and more recently Schiff and Bates, have called attention to many other ways in which the electronic medical record can enhance clinical reasoning.102,142 Besides providing clear access to the necessary data, good records help clinicians organise their thoughts, support collaborative thinking, improve efficiency and promote feedback. Another promising type of clinical decision support enabled by electronic records is the graphic display of timeline data to assist in the interpretation of diagnostic test results and to help detect subtle trends.137

Interventions

A recent review identified 10 newer studies, each focused on a specific clinical condition.135 Of these, only four had positive results: one reported improved ability to detect and diagnose mood disorders in outpatients,136 two improved diagnosis in acute coronary syndromes36,37 and one, evaluating diagnosis of acute abdominal conditions on a surgical service, improved provider performance but not patient outcomes38 (table 4). Finally, De Simone et al39 describe a system that collects clinical information directly from the patient and synthesises it to aid the clinician in diagnosing the cause of headaches. Overall, the studies were sound, and the results seem generalisable by virtue of testing a range of subjects and case types.

Another approach to supporting the diagnosis of specific conditions is the Infobutton functionality described by Cimino.40 The only available study of Infobuttons included subjects at varying levels (attending physicians, residents, medical students, nurses), who were mostly satisfied with the tool. The impact of the tool on diagnostic accuracy and patient outcomes was not assessed.

Computer-aided detection systems

Interventions

Five studies have examined the use of computer-aided detection systems to aid radiologic diagnosis. Peldschus et al165 studied the effectiveness of an automated computer-aided detection system for chest CT studies and found that it identified both new true positives and false positives. Berbaum et al166 found that use of a computer-aided detection system in chest radiography could not counteract the satisfaction-of-search effect (the tendency to stop looking for additional abnormalities once a first one has been found) in 16 subjects. In another study, Kakeda et al demonstrated a significant beneficial effect of using computer-aided diagnosis support to help analyse chest radiographs.167

In mammography, Jiang et al168 found that computer-aided diagnosis reduced inter-observer variability, but in another study, computer-assisted mammography interpretation had no beneficial effect on cancer detection and significantly increased both the false positive rate and the biopsy rate.169 A recent commentary on computer-assisted detection noted that while use of this technology is increasingly the norm, the jury is still out on its utility.170 All of the intervention studies reviewed were solid in design and in the interpretation of the results and conclusions, but generalisability is limited because the studies covered just two domains (chest x-rays and mammograms).

Computer-aided interpretation systems

Interventions

Two studies focused on technology to improve ECG interpretation. Daudelin and Selker171 reported using an ECG-based acute cardiac ischaemia predictive instrument to improve triage decision-making in the ER. Olsson et al172 studied the use of an artificial neural network trained to automatically detect ECGs indicating possible transmural ischaemia and found that this decision support tool was effective in improving inexperienced interns' interpretation of ECGs.

General decision support tools for medical diagnosis

Computer-aided decision support tools have also been developed to assist specifically with differential diagnosis. The clinician inputs the patient's key findings, and these programmes suggest possible diagnoses; anecdotally, in a small fraction of searches, these tools succeed in suggesting a difficult or obscure diagnosis that was previously missed. Some programmes help refine the choices with further suggestions of questions to ask, findings to look for or tests to perform. Berner et al evaluated the first-generation products (QMR, DXplain, Iliad and Meditel) using test scenarios, and all of the products were effective in providing useful suggestions.173 However, the correct diagnosis appeared on the suggestion list only half to three-fourths of the time, and all of the programmes generated a large number of extraneous conditions.174 Some of these initial products are no longer available, although DXplain has been maintained and updated.
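A toy sketch of the shared input-findings/ranked-suggestions pattern is shown below. The miniature knowledge base and overlap score are invented for illustration; real systems such as DXplain rely on far larger curated databases and more sophisticated scoring:

  # Invented, drastically simplified finding-disease associations.
  KNOWLEDGE = {
      'subarachnoid haemorrhage': {'sudden headache', 'neck stiffness', 'vomiting'},
      'migraine': {'sudden headache', 'photophobia', 'nausea'},
      'meningitis': {'fever', 'neck stiffness', 'photophobia'},
  }

  def rank_diagnoses(findings):
      # Score each diagnosis by the fraction of its findings that were entered.
      findings = set(findings)
      scored = [(len(findings & f) / len(f), dx) for dx, f in KNOWLEDGE.items()]
      return sorted(scored, reverse=True)

  for score, dx in rank_diagnoses({'sudden headache', 'neck stiffness'}):
      print(f'{dx}: {score:.2f}')

This structure also suggests why suggestion lists can be noisy: many diagnoses share common findings, so entering a few non-specific findings matches many candidate conditions.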

Interventions

Of the many newer web-based decision support tools, ‘ISABEL’ has been the most extensively evaluated. Compared with first-generation tools, ISABEL displays much improved sensitivity, both in paediatric settings41,42 and in analysing adult case scenarios, in which its sensitivity approached 100%.43–45 ‘Google’ searching has also been evaluated in medical settings, but it suggests the correct diagnosis in only 58% of difficult cases.46

Category 3 summary

Overall, ‘getting help’ during the diagnostic process may be beneficial. The use of decision support resources has been studied more extensively than any other intervention, and these approaches show promise for reducing diagnostic errors, provided clinicians actually use them. More research is needed regarding the use of second reviews, teams and librarians.

Discussion

Reducing harm from diagnostic errors requires interventions to improve the cognitive processes that underlie clinical reasoning. We identified a reasonably large literature on potential interventions and organised these interventions into three categories: (1) Increasing knowledge and expertise, (2) Improving intuitive and deliberate consideration and (3) Getting help from colleagues, consultants and tools.

We found that most interventions in the literature were simply ideas or suggestions. Many of these are well conceptualised and widely endorsed, and seem ripe to be tested in experimental or real-world clinical settings. A major finding in each of the three categories was a large discrepancy between the broad and enthusiastic recommendations for the various interventions and the relative paucity of actual trials. Of the studies that tested true interventions, few included robust designs or metrics. Typically, studies used an observational before–after design with a small number of trainees, clinicians and/or healthcare sites, without a control group.

Our findings also affirm that the science of outcome measurement in this area is underdeveloped. Educational interventions in particular are difficult to evaluate in terms of changing attitudes and behaviours in practice. One major issue is the difficulty of demonstrating that diagnosis can be improved by any approach in real-world settings. Definitions of diagnostic error are not standardised and error designations are typically subjective judgements, often confounded by hindsight bias. Measurement instruments and methods to evaluate cognitive intervention effects are not well developed. Additionally, because diagnostic error reflects the interplay of system-related and patient-dependent factors, the true effect of a purely cognitive intervention might be difficult to ascertain. All of these factors pose challenges in the design of future interventions in this area.

The major limitation of this review is the likelihood that we overlooked conceptual ideas for improving decision-making from both medical and non-medical fields. Medical diagnosis is essentially a special case of decision-making under conditions of uncertainty, and ideas for improving these decisions can arise from almost any discipline, including the social sciences, business and military scholarship.

A clear challenge going forward is to identify the advances in these areas that might be applicable to improving the reliability of medical diagnosis. Despite the many shortcomings of these studies, our review identified promising ideas for reducing diagnostic error in each of the three major categories.

Increase knowledge

At the present time, disease-specific training is the only intervention that is both supported by evidence and readily implementable. In the future, simulation offers potential both for teaching clinicians about diagnostic error and error-prevention strategies and as a method to rapidly build expertise through exposure to many types of disease variants. Feedback also offers the potential to reduce errors by helping develop expertise, and it is key to reducing overconfidence, which in turn could open the door for clinicians to appreciate the possibility of their own errors and take action to avoid them. Deliberate feedback is embedded in many approaches that seek to improve individual and team performance outside of medicine.

Improve clinical reasoning

Although some of the interventions to improve reasoning have been successful with trainees, most have yet to be implemented or evaluated in practice. Reflective practice and active metacognitive review may have great potential to reduce diagnostic error, and the tools to promote these practices need to be further developed and evaluated in practice. These approaches expand the number of conditions to be considered and effectively address many of the major causes of cognitive error, including context errors, framing bias and premature closure. However, the trade-offs are unclear: will the broadened consideration of alternative diagnoses lead to inappropriate or costly testing, divert attention away from the correct diagnosis or prove deleterious in some other way?

Get help

Decision support for diagnosis has the unique advantage that it can be implemented at the system level, without requiring some new skill or behaviour to be learnt by clinicians. Still, clinicians need to be willing to take advantage of these resources, and error reduction will critically hinge on how well the support functionality is incorporated into everyday workflow and how clinicians will deal with the specificity problem. Using informaticians, working more effectively in groups, taking full advantage of the comprehensive electronic health record and relying more on actuarial tools (algorithms) may be effective strategies. Second opinions and consultations bring fresh eyes to examine a case, a powerful and effective way to find and correct diagnostic errors.

Conclusions

In conclusion, there is a surprisingly wide range of possible approaches to reducing the cognitive contributions to diagnostic error. Not all the suggestions have been tested, and of those that have, the evaluations typically involved trainees in artificial settings, making it difficult to extrapolate the results to actual practice.

The field is immature and progress in reducing diagnostic error will require considerable research to evaluate the relative merits of these different ideas, refinements in the methodology of defining and measuring outcomes in preventing diagnostic error and harm, and leveraging advances in other aspects of medical decision-making and cognitive sciences that may make medical diagnosis more reliable.

Acknowledgments

We gratefully acknowledge administrative and literature research assistance from Ms Grace Garey, Mary Lou Glazer, Diane Martin and Wendy Isser.

References

Supplementary materials

  • Supplementary Data


Footnotes

  • The authors of this paper are solely responsible for its content and have disclosed no competing interests. The findings and interpretations in the paper do not represent the opinions or recommendations of the institutions with which the authors are affiliated, the Agency for Healthcare Research and Quality, the US Department of Health and Human Services, or the Department of Veterans Affairs.

  • Funding This study was funded by the Agency for Healthcare Research and Quality (AHRQ) ACTION II Task Order #8, Contract No. HHSA290200600001 and in part by the Houston VA HSR&D Center of Excellence (HFP90-020).

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.