Role of cognition in generating and mitigating clinical errors
Vimla L Patel1,2, Thomas G Kannampallil1,3, Edward H Shortliffe2
1 Center for Cognitive Studies in Medicine and Public Health, The New York Academy of Medicine, New York, New York, USA
2 Department of Biomedical Informatics, Arizona State University, Phoenix, Arizona, USA
3 Department of Family Medicine, College of Medicine, University of Illinois at Chicago, Illinois, USA
Correspondence to Dr Vimla Patel, Center for Cognitive Studies in Medicine and Public Health, The New York Academy of Medicine, 1216 Fifth Avenue, New York, NY 10029, USA; vpatel{at}nyam.org

Abstract

Given the complexities of current clinical practice environments, strategies to reduce clinical error must appreciate that error detection and recovery are integral to the function of complex cognitive systems. In this review, while acknowledging that error elimination is an attractive notion, we use evidence to show that enhancing error detection and improving error recovery are also important goals. We further show how departures from clinical protocols or guidelines can yield innovative and appropriate solutions to unusual problems. This review addresses cognitive approaches to the study of human error and its recovery process, highlighting their implications for promoting patient safety and quality. In addition, we discuss methods for enhancing error recognition and promoting suitable responses through external cognitive support and virtual reality simulations for the training of clinicians.

  • Complexity
  • Human error
  • Medical education
  • Patient safety
  • Safety culture

Introduction: complexity, human cognition and error

Complexities of the clinical work environment are well documented,1 and are known to affect the management of care,2,3 clinical workflow,4 tasks,5 errors,6 interruptions7 and information or task overload.8 Such complexity is further heightened in acute care settings by the distributed nature of information5,9,10 and the challenges in effectively using health information technology (HIT).11 A commonly described, unwanted manifestation of complexity in clinical settings is error. Observers suggest that human errors fall into two major categories: (a) slips that result from the incorrect execution of a correct action sequence and (b) mistakes that result from the correct execution of an incorrect action sequence.12 There is a substantial literature on patient safety that concerns itself with eliminating slips (eg, through point-of-care computer reminders, surgical checklists and mnemonics),13,14 which are often attributed to lapses in concentration, distractions or fatigue.

Despite a substantial literature on procedural errors in complex settings (eg, slips, mistakes), much less work has examined how errors may result from conceptual misunderstandings. A notable exception has been cognitive research that has focused on the heuristics and biases that affect diagnostic reasoning and decision-making.15 While such research has been remarkably productive,16,17 offering results in economics, finance, experimental psychology and medical decision-making, the precise role of heuristics and biases in wide-ranging real-world clinical settings has been questioned.18,19

In distributed collaborative environments (such as the team-based settings of modern clinical care), external tools support several functions, including minimising cognitive demands, serving as memory aids, offering decision or reasoning support and offloading tasks. However, many of these tools are poorly designed, creating challenges at the interface with clinicians and with the clinical environment, and thereby leading to errors.20 While evidence on the effectiveness of HIT in improving clinical practice and patient safety is still evolving, its effect on clinical cognition is widely acknowledged.21

This paper examines the cognitive aspects of human error in healthcare that compromise patient safety. Cognitive functions such as perception, memory, attention, knowledge, action and inference are constrained and affected by social, cultural and organisational factors. To manage error effectively, systems in which people work can be adapted to their cognitive strengths and weaknesses, and can often be designed to ameliorate the effects of human errors. Understanding the underlying cognitive mechanisms of medical error is an important preliminary step towards reducing adverse events. Furthermore, results from the study of human error can also be used as probes to understand human memory, thought and action.22

The tendency to attribute causal blame for errors to individuals is still common,23,24 despite growing awareness that individuals are often not the sole source of error. Societal pressures and a litigious climate often override a more balanced approach to the explanation of error that considers contextual, cognitive and sociocultural components.11 While a detailed typology of the nature of errors helps with conceptual organisation (see refs. 25 and 26 for detailed reviews), understanding the causal underpinnings of these errors requires an integrated perspective that accounts for human beings and their systems of work.

Error generation during complex work is to be expected, and error identification and correction (recovery) are to be encouraged. Both are integral to human work activities.27 Furthermore, errors that occur in complex social and cognitively challenging settings typically extend beyond the individual decision-maker. The sociocognitive approach discussed in this paper extends theories of distributed cognition28 and accounts for environmental and contextual elements when evaluating errors. Training for work in such environments, where repetitive tasks are common, requires understanding complex sequences of tasks and subtasks.29 The goal is to be efficient and to maintain the structural integrity of the tasks under circumstances of uncertainty and change (ie, maintaining an ecological resilience). Additionally, groups and individuals need to be trained to cope and adapt flexibly to unpredictable shifts in context (ie, adaptive resilience).30

As we show, people who are involved in the generation of errors are also often good at detecting and addressing them before deleterious effects can occur—an observation mirrored by researchers in the aviation and transport industries. Safety researchers in these domains have noted that the ‘total eradication of human error was quickly abandoned as an objective (being unrealistic from a simple theoretical viewpoint), and safety naturally evolved toward a more systemic perspective’.31

Error generation and recovery

An overview of cognition and error

Research on error has crossed domain boundaries—from error management in nuclear power plants,32 aviation27,31 and transportation services,27 to studies in medicine and patient safety.24,33 While these domains differ in important respects, theoretical foundations regarding errors are similar. Early studies of error detection and recovery focused on problem-solving tasks, where error recovery was interspersed with problem-solving activities consisting of a progressive phase and an evaluative phase. In the progressive phase, a problem solver works towards a task goal, while in the evaluative phase, the focus is on assessing the adequacy of a part of the problem that has been completed, including its accuracy and potential errors.34

While the detection of error requires significant cognitive resources, recovery is an even more complex process, involving reconstruction and reassessment of the original erroneous action. To illustrate this notion with a simple example, consider the cognitive tasks of a resident whom we observed working in the intensive care unit (ICU) with a patient admitted from the emergency department (ED) with upper gastrointestinal bleeding. The ED record described a 49-year-old patient with hepatitis C who was hypotensive due to haemorrhagic shock. While evaluating his status during ICU rounds, the resident recognised an oversight from the ED that may have led to an unnecessary radiological test: no examination for ascites had been performed, even though the patient's distended abdomen, in the setting of cirrhosis, could have been due to ascites; a physical examination could have detected the abdominal fluid, and the CT scan might then not have been required. The resident also recognised an error in the patient's management. The patient had been given diazepam for irritability, which the resident identified as inappropriate in the setting of possible hepatic failure; he corrected this error by proposing that the team stop the diazepam, seek to verify the cause of the altered mental status and meanwhile give prophylactic lactulose, on the chance that the mental status changes were due to hepatic encephalopathy. Thus, recovery (ie, identification and correction) required the knowledge and skills to recognise a problem and, in turn, to intercede quickly to develop and implement remedial measures.6

As is highlighted in this illustrative example, domain knowledge is generally relied upon for error recovery.35,36 Furthermore, aviation researchers have shown that expert pilots made fewer errors under high-workload conditions. Although there is considerable evidence that experts are better than others in their knowledge organisation, performance efficiency and ability to recognise relevant features of problems,37,38 there is also evidence that experts make mistakes, especially in time-pressured settings.

Studies of medical problem solving have shown that highly trained clinicians, working in areas where they are experts, are prone to errors of premature closure,39 a type of cognitive error in which physicians invoke a diagnostic hypothesis and fail to consider other reasonable alternatives. For example, if a young Asian male presents with leg weakness, heat sensitivity and eating difficulties, it may be tempting to make an erroneous quick diagnosis of hyperthyroidism with bilateral paralysis rather than pursuing, and ultimately recognising, the less frequent and more complex hypokalaemic periodic paralysis.

Error recovery in clinical environments

Remarkably few studies have investigated the role of error recovery in clinical settings. In a preliminary study in a cardiothoracic ICU, an attending physician, a resident and a student were shadowed, and their discussions were audio-recorded for a total of 10 h.40 Analysis showed that the experts (attending physicians) detected and recovered from 75% of the errors. Furthermore, 70% of the identified errors required expert knowledge for recovery and would likely have had serious consequences had they gone uncorrected. Residents detected fewer errors (65%) and corrected a smaller proportion of those they detected (61%). Medical students detected the fewest errors, and those they corrected were mostly routine errors that did not require detailed medical knowledge. The experts’ ability to monitor their own thought processes is likely to have enhanced their error detection and correction strategies.23

We conducted a series of studies in laboratory and real-world clinical settings to determine the relationship between expertise and error recovery in problem-solving tasks undertaken by single individuals and by teams. Our research can be framed along two dimensions: first, the environment, and second, the role of external cognitive support (eg, participation in a team or use of technology). The error-recovery processes were studied using: (a) controlled experimental problems in a laboratory using simulated cases,6 (b) virtual 3D computer-based environments using subject-controlled virtual avatars and cases,41 (c) real clinical settings using simulated patient scenarios (a seminaturalistic approach)33 and, finally, (d) in a naturalistic setting during clinical rounds.42

In a laboratory-based study (created, piloted and executed with assistance from clinical collaborators), physicians were asked to solve two intensive-care cases with intentionally embedded errors. The cases differed in complexity; the simple errors could be detected with only a single-step inference along with factual knowledge during problem solving, while the complex errors required the integration of disparate data with factual knowledge. The findings were unexpected; none of the experienced physicians detected more than half of the embedded errors. This showed the limitations of individual cognitive processing and the possible role for external cognitive support (eg, in the form of HIT tools or interaction with a team) for the identification and correction of errors.

In order to evaluate the role of clinical teams in providing external cognitive support, we conducted a similar study where two clinical cases with embedded errors were presented for evaluation and discussion at clinical rounds. Five team-rounds in a medical ICU were audio-recorded and analysed for team communication, interaction, error identification and recovery. The results showed that team members collaborated by recognising and correcting each other's errors in an iterative fashion, as they jointly evaluated patient-management plans, leading to better overall performance in which over 65% of the errors were corrected.33 This improved performance can potentially be attributed to the shared cognitive load among team members and the consistent ‘cross-checking’ by team members as the discussions progressed. However, extended discussion and elaborations paradoxically led to new errors (n=16), not all of which were identified (n=10) or corrected (n=9). While teams serve as good safety nets in detecting and correcting errors, unmonitored and lengthy discussions can increase the possibility of unintended negative consequences through the generation of new errors that may not be detected.

This study was extended to evaluate three morning rounds (approximately 9 h), with recording of 26 team interactions at the bedside. We found that 77% of the errors arising during team discussions were identified and corrected;42 those corrected were errors with implications for the immediate management of the patients. The remaining 23%, identified only after the fact by our clinical collaborators, had not been detected during morning rounds. It appears that teams working at the bedside tend to optimise performance (ie, finalising decisions in a very short period of time), leaving little room for discussion of mistakes and little opportunity to learn from errors, even when the errors have been corrected.

Our approach has been driven by our desire to develop a distributed cognitive framework28 for studying errors. The higher cognitive load and the dynamic nature of work activities in critical care settings are likely to narrow attentional focus exclusively to the task at hand, potentially reducing the raw number of errors (similar to the results from studies in aviation, where fewer errors were found in high-pressure situations).31 However, the errors that do occur are less likely to be identified and corrected, often requiring additional safety nets to prevent catastrophic patient outcomes (a notion known as resiliency). Resiliency may be achieved through the addition of fail-safe mechanisms that provide the necessary oversight, preventing errors from propagating and compounding into major adverse events and patient harm.43

Deviations from protocols

Standard protocols and guidelines help to provide consistency of care. In order to adapt to a complex environment, however, deviating from standard procedures is sometimes necessary. In one of our studies, thirty trauma cases were audio-recorded in a level 1 trauma centre. Among these, 152 deviations from standard protocol were identified and classified as either errors or suitable innovations (ie, beneficial deviations from the typical norms of practice). For example, when evaluating an ED patient with a head injury, a resident noticed that the patient had a high Glasgow Coma Scale score, a week-old wound on the leg and a high temperature. The resident did not simply perform the guideline-required head X-ray, but deviated from the protocol by first requesting blood cultures. The problem was resolved after the culture result showed an acute bloodstream infection and appropriate antibiotics were instituted.

We found that trauma physicians (experts), when they deviated from standardised protocols, made significantly fewer errors when compared with first-year and second-year residents (comparative novices).44 The deviations occurred under conditions of complexity, as well as in the presence of high levels of uncertainty about the patient's condition. These deviations were not all procedural; some related to strategic planning (proactive deviation), whereas deviations by the novices were mostly procedural in nature and reactive to specific events that occurred during patient management (reactive deviation). This study provides supportive evidence for the claim that deviations from protocol do occur in critical-care environments, but that not all deviations are errors; some reflect deliberate and appropriate actions or judgements by expert clinicians under atypical conditions.

In summary, when regular or standard patterns do not fit the current problem, alternative ideas are generated. This is the process of innovation, and innovation is not possible without deviations from what is ‘usual’. Given the myriad standardised guidelines that are extensively used across clinical settings, this conceptualisation of ‘deviations as innovations’ provides a new lens for interpreting atypical actions rather than merely categorising them as errors.45 Furthermore, such innovative deviations are learning opportunities that may contribute to our knowledge base or prompt revisions of protocols to allow for such circumstances should they arise again in the future.

Distributed framework of error generation and recovery

To describe the nature of error generation and recovery in dynamic clinical environments, we have proposed an analytical framework that is partly motivated by Rasmussen's46 characterisation of error as a violation of the bounds of acceptable practice norms (figure 1). An error is initiated by a violation of the bounds of safe practice, as shown by the first boundary in the figure. Before any serious harm occurs, there is a time period during which clinicians may detect and recover from this error (referred to as a ‘near miss’). If the correction or recovery does not occur, the error proceeds to the stage where it violates the second boundary, resulting in an adverse event. Some of the clinician behaviours we have observed in the practice situation are outside the accepted (culturally defined) bounds of safe practice. Figure 1 highlights how these observed behaviours occur during the evolution of errors. Our recent studies have accordingly focused on the two boundaries shown in figure 1: the transition from safe practice to an error (boundary 1) and the transition from a near miss to an actual adverse event (boundary 2). Evidence shows that it is these boundaries that are fundamental considerations in our understanding of the nature of error and the promotion of patient safety. In addition, the cognitive components of these transitions are central to our efforts to understand error and its mitigation. Although this description of boundaries does not correspond exactly to Rasmussen’s47 notion of boundaries (constraints) on the space of possibilities in a complex workspace, his ‘boundary of acceptable state of affairs’ does relate to the notion of a perceived violation as an opportunity for recovery.

Figure 1

A framework for error detection and recovery.

Using this characterisation, a system is in a safe or desired state when it has not crossed beyond the initial boundary of safe operation. When a system crosses boundary 1, it is in an unsafe or undesired state and, if it crosses boundary 2, an accident or incident will result. The movement of systems across the boundaries is inevitable in busy work environments where the pressure to achieve objectives and goals, such as providing quality of care within tight resources and operating constraints, leads to systematic migration towards the undesired boundaries, and sometimes across them. Safety interventions must therefore focus on preventing errors to keep a system from crossing boundary 1 and on managing errors by making it possible for a system to return to safety after temporarily moving into the ‘near miss’ region. The implication is that clinicians must know how to detect quickly that a patient's situation reflects an unsafe or compromised state and to take steps to return the patient to a safe state, even when an adverse event has actually occurred (beyond boundary 2).
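To make the dynamics of figure 1 concrete, the following minimal Python sketch (our own illustration; the state and function names are hypothetical and not part of the published framework) treats the system as moving among three states, with boundary 1 separating safe practice from a near miss and boundary 2 separating a near miss from an adverse event.

```python
from enum import Enum, auto

class CareState(Enum):
    """Illustrative states for the boundary framework sketched in figure 1."""
    SAFE = auto()           # within the bounds of safe practice
    NEAR_MISS = auto()      # boundary 1 crossed; detection and recovery still possible
    ADVERSE_EVENT = auto()  # boundary 2 crossed; harm has occurred

def step(state: CareState, error_occurs: bool = False, recovered: bool = False) -> CareState:
    """Advance the system one step across the two boundaries of figure 1."""
    if state is CareState.SAFE:
        # Crossing boundary 1: a violation of safe practice initiates an error.
        return CareState.NEAR_MISS if error_occurs else CareState.SAFE
    if state is CareState.NEAR_MISS:
        # In the near-miss window, detection and correction return the system to
        # safety; otherwise the error crosses boundary 2 and becomes an adverse event.
        return CareState.SAFE if recovered else CareState.ADVERSE_EVENT
    # Beyond boundary 2, the focus shifts to mitigating harm rather than preventing it.
    return CareState.ADVERSE_EVENT

# A near miss: an error occurs and is then detected and corrected.
state = CareState.SAFE
state = step(state, error_occurs=True)   # -> NEAR_MISS (boundary 1 crossed)
state = step(state, recovered=True)      # -> SAFE (recovery within the near-miss window)
print(state)                             # CareState.SAFE
```

The sketch simply restates the claim made above: safety interventions can act either on the first transition (keeping the system from crossing boundary 1) or on the second (returning it to safety before boundary 2 is crossed).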

Consider this actual example of a medication overdose that was observed during the care of a psychiatric patient in the ED.48 The error occurred due to misunderstanding or limited knowledge regarding the proper dose of a drug (an anticonvulsant agent, Lamictal). The drug and dosage were mistakenly interpreted from a note in the patient's old chart and then handwritten on a whiteboard by the ED psychiatrist, where the letter ‘l’ at the end of ‘Lamictal’ was erroneously interpreted as the number ‘1’ so that ‘Lamictal 200 mg’ was incorrectly read as ‘Lamicta 1200 mg’. Due to the increased workload in the already busy unit, complicated further by shift changes, the incorrect dose interpretation was overlooked by both the psychiatry and general ED attendings and residents.49 Subsequently, although the administering nurse and the responsible pharmacist did recognise that a very large dose had been prescribed, they failed to question the dosing (even though the dosage form of the agent meant that a large number of pills had to be given to provide a full 1200 mg). This failure to verify and question unusual dosing before administration of the drug represented a violation of a boundary of safe practice.

The error was more than a mere slip because the mistake was noted, but accepted as correct even before the agent was administered. Recovery from such an error (ie, from misinterpreting the dosage due to poor handwriting or trusting the unusual dose request) might have occurred if the administering clinician or the pharmacist had re-evaluated or questioned the unusually high dosage prior to dispensing or administering the agent. In this example, the overdose effect was not detected initially, but was recognised later when the patient developed neurological symptoms (a tremor) and was placed under close observation after gastric lavage. This example demonstrates late detection and subsequent recovery from a violation at the boundary of safe practice after an adverse event has occurred. A better understanding of the possibility that such errors will occur could have led to training and practices to introduce resiliency so that earlier detection and recovery would have been possible.
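As a concrete illustration of the kind of fail-safe oversight discussed above, the short Python sketch below flags an unusually large ordered dose for verification before dispensing. It is a hypothetical example only: the dose ceiling, the drug entry and the function name are assumptions made for illustration, not an actual formulary or clinical decision-support system.

```python
# Hypothetical sketch of an automated dose-range check; the ceiling value and
# names below are illustrative assumptions, not a real formulary or CDS API.

USUAL_MAX_SINGLE_DOSE_MG = {
    "lamictal": 200,  # assumed usual ceiling for this illustration only
}

def dose_needs_verification(drug: str, dose_mg: float) -> bool:
    """Return True when an ordered dose exceeds the usual ceiling (or the drug is unknown)."""
    ceiling = USUAL_MAX_SINGLE_DOSE_MG.get(drug.lower())
    if ceiling is None:
        return True  # unknown drug: force a human check rather than passing silently
    return dose_mg > ceiling

# The transcription error described above: 'Lamictal 200 mg' misread as '1200 mg'.
if dose_needs_verification("Lamictal", 1200):
    print("Unusual dose ordered - verify against the original order before dispensing.")
```

Such a check does not replace clinical judgement; it creates one more opportunity for the nurse or pharmacist in this example to question the dose before boundary 2 is crossed.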

Implications for quality and safety

From error intolerance to error recovery

The framework outlined in figure 1 provides opportunities for structuring future research on the nature of errors, the identification of unsafe situations and recovery from them. The error boundaries provide implicit opportunities for designing appropriate error interventions; for example, the strategies in the region of ‘near misses’ would differ from those in the ‘error recovery’ phase. In the case of ‘near misses’, strategies that help physicians to recognise or detect errors are also likely to help them to be cognisant of the potential for errors.15 In contrast, interventions to address the ‘error recovery’ phase are likely to depend on knowledge-based strategies or contextual support that can help physicians to identify, and recover from, these errors. For example, cognitive support systems aligned with the reasoning processes of individuals can provide prospective support to aid users in identifying potential errors and recovering from them.50 As was previously emphasised, the capacity for error detection and recovery increases with expertise.6 Shifting the focus from error intolerance to error recovery through training can potentially change the social context by relieving clinicians of the notion that errors are unacceptable and focusing instead on the skills that lead to recognising and mitigating such errors.

Future directions for improving patient safety in the context of error recognition and management should involve the development of cognitively plausible virtual training environments that simulate real-world patient-care situations. Such environments have to be developed through close collaboration among clinical experts, cognitive scientists, informaticians and computer scientists (see ref. 51 for a detailed review; also see refs. 41, 52). While such training environments are not yet widely adopted, their potential to foster learning and the development of expertise is well established.

Errors as opportunities for learning

Seifert and Hutchins53 studied a distributed cooperative system where there were frequent errors, along with their successful detection and correction. The distributed system (in this case, members of a team) provided a robust mechanism for the identification and correction of errors by providing just-in-time learning and allowing the team to maintain a high level of overall performance. Similarly, clinical environments can be cooperative environments where novice physicians (eg, residents or medical students) can learn on the job, and the distributed support system (eg, HIT tools, team members or a supervising attending physician) can act as an ecological mechanism for detecting and correcting their potential errors.

Although it is often appropriate to encourage adherence to standard protocols or guidelines, effective training helps to ensure that physicians can recognise situations that require a creative, adaptive solution and where the standard approach may simply be insufficient (or incorrect). Such educational approaches will be critical both for evaluating these cognitive skills comprehensively and for creating teams that can recognise circumstances where deviations from a protocol may be necessary. The notion that error management can be viewed as a strategic mechanism for learning has been expounded in other fields. Computer-based simulations (eg, in a virtual world) can mimic the dynamic conditions of actual practice and create opportunities for learning by giving trainees real-time feedback, providing a closer connection between evidence and outcome. Such simulations offer low-cost learning laboratories, similar to airline flight simulators. We believe that learning based on these cognitive approaches can create a change in thinking that will subsequently influence future error identification and correction.

Conclusions

We have shown that error detection and recovery play a central role in the development of clinical expertise. This process is of substantive importance in distributed patient-care environments, where teams and the environment can engender opportunities to learn iteratively on the job (eg, for trainees such as residents). Although we have not explicitly described the specific roles of the social and environmental context as they interact with human cognition, the framework and the examples presented consider a situated perspective, where cognition cannot be separated from its sociotechnical environment. The phenomena we have described are derived from specific clinical examples, and the extent to which such notions are generalisable can be further tested. Shifting the focus from error intolerance to error recognition and recovery can lead to better understanding of when and why errors occur, as well as how to manage them under complexity. We encourage a philosophy regarding error causality and attribution—what we call a new error etiquette—that accepts the inevitability of human error, emphasises learning from error and encourages vigilance, timely intervention when errors occur, open discussion of the problems and avoidance of excessive criticism of those involved.

Footnotes

  • Twitter Follow Edward Shortliffe at @tshortliffe

  • Contributors VLP conceived this review and created a preliminary outline. VLP, TGK and EHS collected and organised the literature for this review. All authors participated in drafting the article and revising it critically for important intellectual content and gave final approval of the version to be published.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.
