Psychological contribution to the understanding of adverse events in health care
D Parker,1 R Lawton2

1Department of Psychology, University of Manchester, Manchester M13 9PL, UK
2Department of Psychology, University of Leeds, Leeds LS2 9JT, UK

Correspondence to: Dr D Parker, Department of Psychology, University of Manchester, Manchester M13 9PL, UK; Parker@fs4.psy.man.ac.uk

Abstract

In the past it has sometimes been assumed in health care that all adverse events involve individual incompetence and therefore blameworthiness, an assumption that is likely to hinder the development of comprehensive and honest incident reporting systems. At the same time, a full understanding of adverse events in healthcare systems requires that distinctions are drawn between a variety of error types, each of which has different origins and demands different strategies for remediation. In this paper a range of cognitive biases identified by psychologists is described. Examples are given of these biases, which are naturally employed in trying to understand our own behaviour and that of others, and therefore affect our understanding of adverse events. It is suggested that awareness of these biases, which form part of our normal thinking, should help to avoid a narrow focus on individual culpability and facilitate a more sophisticated approach to the investigation of adverse events.

  • psychology
  • cognitive biases
  • human error
  • adverse events

Interest in human error, broadly defined, is booming in health care. Raising awareness of the progress that has been made in developing typologies of human error, in the context of research in other industries, ought to be helpful to those trying to reduce error rates in health care. At the same time, a working knowledge of the social psychology of attributions and cognitive biases should be beneficial in alerting those who investigate adverse incidents to our natural tendency to explain events in predictable ways. The objectives of this paper are to outline the literature on human error types; to describe the way we explain others’ behaviour (attributions) and the cognitive biases that operate in our thinking about risk; and to show how these biases impact on (1) the way we analyse and learn from accidents and (2) the way we react to those who have erred.

HUMAN ERROR THEORY

Over the last two decades the focus on understanding accidents in organisations has moved away from the identification of accident prone individuals, who can be blamed and weeded out, to a much more sophisticated understanding of the complexities of the interactions between individuals and systems.1–6 In terms of health care, the Department of Health now clearly acknowledges in policy documents that the blame culture that has characterised the NHS does not contribute to the understanding and management of medical error.7,8 A more sophisticated approach is needed. As an initial step it is important to acknowledge that to err is human.9 We all make mistakes, and one of the common mistakes we make is to overestimate our ability to function flawlessly, sometimes under adverse conditions—of time pressure, stress, fatigue, and conflicting demands. Expecting errorless performance is simply unrealistic. Moreover, it is naïve to assume that all errors have the same underlying causal characteristics. Theories of human error developed from research findings in cognitive and social psychology laboratories and from observational studies of error in everyday life10 suggest that there are several broad types of error, or aberrant behaviour.

Much of the time our performance on everyday tasks is automatic, rapid, and occurs without conscious attention. Routine tasks are performed automatically, freeing up attention for other tasks and allowing us to do several things at the same time (such as driving). However, when something novel and unexpected occurs (say, a dog runs out into the road), attention is immediately focused and we take conscious control of the situation. Slips and lapses happen in this automatic mode, when an intended action sequence is executed wrongly, whereas mistakes happen when we are in conscious control mode and successfully execute a plan that is itself faulty. Whenever possible we try to use preprogrammed solutions of the “If–Then” type. This relies on a correct assessment of the situation. When our assessment is incorrect, we may apply the wrong stored solution. When the situation is totally novel, we have to devise a solution in real time, and then a range of cognitive biases comes into play. There is a tendency to go with the first solution that comes to mind, and to discount evidence that discredits our initial analysis of the situation.
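
To make the distinction concrete, the following minimal sketch (ours, not the authors’; the situation labels and responses are hypothetical) pictures “If–Then” performance as a lookup of stored responses keyed by how the situation is assessed, so that a misassessment retrieves a well practised but wrong response, and a situation with no stored rule forces slower, knowledge based improvisation.

# Illustrative sketch only: "If-Then" stored solutions as a lookup table.
# The situation labels and responses are hypothetical, not from the article.

STORED_RESPONSES = {
    "familiar alarm": "silence the alarm and continue routine checks",
    "equipment fault": "switch to the backup equipment",
    "patient deteriorating": "call for senior help and reassess",
}


def respond(assessed_situation: str) -> str:
    """Return the preprogrammed response for the situation as assessed.

    If the assessment is wrong, a well practised but wrong rule is applied
    (a rule based mistake); if no rule matches, a plan has to be devised in
    real time (knowledge based performance), where biases such as settling
    on the first solution that comes to mind take over.
    """
    return STORED_RESPONSES.get(
        assessed_situation,
        "no stored rule: devise a plan in real time",
    )


# A misassessed situation retrieves the familiar, but wrong, response.
print(respond("familiar alarm"))
print(respond("wholly novel event"))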

In a complex system such as health care, slips, lapses and mistakes are inevitable. It is almost impossible for a system to put in place defences against all possible errors. In the highly technical systems evident in much of modern health care, the operator is not in direct control but supervises the operation of automated processes.11 Often, the systems are so complicated that the operator cannot be expected to have complete knowledge of what the system is doing. Given that even the most competent individual will make errors from time to time, the occurrence of accidents in such systems can be regarded as normal. Moreover, it has been suggested that it is no longer feasible to defend a system against individual unsafe acts because the concatenation of errors likely to lead to an accident in complex systems cannot be predicted.12

Violations are a noticeably different type of aberrant behaviour. They are deviations from rules, protocols or norms, and always have an intentional component. Violations are not mistakes in the true sense of the word, but deviations from the prescribed best/correct way of performing a task. Several types of violation have been specified13:

  • Routine violations occur when skill and experience lead someone to think the rules don’t apply to them.

  • Situational violations occur when the situation necessitates rule violation, for example when there is simply not enough time to carry out the prescribed checks.

  • Exceptional violations arise when the rules in place are not able to deal with a novel situation.

Violations are of particular interest in an organisational context, where the writing of rules is one method used to prevent mistakes and control practice. Although violations represent the intentional circumvention of prescribed best practice, any resulting harm is almost always unintended. The cases where violation does lead to intended harm are best described as sabotage and represent the most extreme and worrying form of aberrant behaviour (rare cases in the UK include Beverley Allitt, a nurse who intentionally poisoned children in her care, and Harold Shipman, a family doctor who murdered many of his elderly female patients). Thankfully, cases like these are very rare and are outside the scope of this article, although it should be noted that a debate about the monitoring of mortality rates to detect this kind of illegal behaviour is currently under way.14

In the health care sector rules (broadly defined) are often less rigid than in other high risk organisations. For example, clinical guidelines aid decision making but may not require strict compliance in all cases. There is, as yet, little consensus about what behaviour represents a violation within a healthcare system because a number of different types of rules exist—for example, protocols, guidelines, policies, care pathways—and their status varies from one healthcare organisation to another. Nevertheless, it is widely acknowledged that clinical guidelines often fail to make the impact on practice that is required.15

Each of the error types mentioned above (slips, lapses, mistakes, violations) requires different strategies for remediation. Better system defences, both physical and administrative, can help to minimise slips and lapses. Improved training and rigorous checking procedures can prevent some mistakes. Good quality guidelines, effective implementation, and the provision of necessary resources and support are all important in promoting compliance and thus avoiding violation.13,15 Table 1 gives an example remediation strategy for each type in the area of anaesthetics.

Table 1 Error types and suggested remediation strategies
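
As a minimal illustration of the mapping just described (our sketch, not the authors’; the phrasing of the strategies is ours and the table’s anaesthetics examples are not reproduced), the broad error types and their remediation strategies can be written as a simple lookup:

# Illustrative sketch of the error type -> remediation mapping described in
# the text above; the category names follow the paper, the phrasing is ours.

REMEDIATION = {
    "slip": "strengthen physical and administrative system defences",
    "lapse": "strengthen physical and administrative system defences",
    "mistake": "improve training and apply rigorous checking procedures",
    "violation": "provide good quality guidelines, effective implementation, "
                 "and the necessary resources and support",
}


def suggested_remediation(error_type: str) -> str:
    """Look up the broad remediation strategy for a classified error type."""
    if error_type not in REMEDIATION:
        raise ValueError(f"unrecognised error type: {error_type!r}")
    return REMEDIATION[error_type]


print(suggested_remediation("violation"))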

Thompson16 described how oxygen and/or anaesthetic supplies can readily become disconnected during surgery and suggested that the imposition of national and international standards and the use of disposable breathing systems have made such occurrences less frequent. Here the design of equipment has improved and reduced the opportunity for slips and lapses. Cooper et al17 examined critical incidents in anaesthesia and found that 82% of incidents involved human error; they suggested additional technical training and improved supervision as two of the most useful ways forward. Additional training serves to lessen the chance that a mistake will be made, and better supervision improves the chance of recovery if one is made. Mindful of the need for prompt corrective action when critical incidents do occur, Eaton et al18 suggested the development of anaesthesia action plans, which specify the behaviour to be followed when an incident occurs and, as such, represent a move away from knowledge based and towards rule based behaviour.

BLAME CULTURE

Until recently there has been a tendency in the healthcare system to assume that all errors involve individual incompetence, and that retraining and monitoring are the keys to improvement. This assumption of incompetence, and therefore blameworthiness, is problematic because it militates against the success of any incident reporting system designed to identify priority areas for improving patient safety.19,20 Fortunately there is now clear movement within the NHS towards considering error from a systems perspective, using root cause analysis to identify both proximal and distal errors in the system.21–23 For example, in the UK the National Patient Safety Agency (NPSA) is trying to promote an open and fair culture in hospitals, encouraging health professionals to report incidents without fear of personal reprimand. Evidence from other industries shows that, while focusing on the individual at the sharp end offers a relatively easy and psychologically satisfying option, much is to be gained from a more thorough and penetrating investigation.

Douglas suggests that identifying scapegoats in the event of an accident serves a defensive function.24 A belief that the risk lies in the individual pilot, nurse, doctor, control room operator, or train driver means that, once that person is removed from the system (for retraining, by transfer or dismissal), the risk is eradicated. On the other hand, attributing the cause of an accident to ongoing organisational deficiencies such as poor communication, poorly designed equipment, or inadequate training offers little comfort to those potentially at risk in the future (colleagues and clients) unless those deficiencies are addressed swiftly and comprehensively.

ATTRIBUTIONAL PROCESSES, COGNITIVE BIASES, AND BLAME

For the systems approach to incident/accident analysis to work well, a distinction must be made between failures that arise inevitably in a complex system and those that are the result of deficiencies that are open to improvement. Making this distinction is made more difficult by the natural tendency to take mental shortcuts, using heuristics or rules of thumb to understand our own and others’ behaviour (that is, making attributions). These shortcuts have been extensively studied by psychologists who have described a range of biases that affect our everyday thinking.25–27 In the following sections some of the principal biases will be described.

Fundamental attribution error

First described by Heider in 1958, the fundamental attribution error is the tendency to focus on dispositional characteristics (such as personality, intelligence, status) in explaining the behaviour of others and on situational factors in explaining our own behaviour.28 The explanation for this is quite straightforward and depends on what is salient from the viewpoint of the observer. In observing another’s behaviour, the person is salient. In observing one’s own behaviour, the situation is salient.29 Although not a factor considered in the medical literature on blame and adverse events, the fundamental attribution error has been shown to operate in real life situations—for example, when jurors make decisions about culpability.30

Belief in a just world

Another bias in the processing of information about adverse events is the tendency to view the world as a just place in which we “get what we deserve”. To believe otherwise is to admit that we, too, are vulnerable to chance outcomes. Research findings suggest that the more serious the consequences of an incident, the more likely we are to judge the behaviour of the individual who erred as inappropriate.31,32 Caplan et al31 investigated the effect of outcome on physician judgements of appropriateness of care. One hundred and twelve practising anaesthesiologists judged the appropriateness of care in 21 actual cases. The outcomes were manipulated so that they were presented as either temporary or permanent, while keeping the physician’s behaviour the same. The study showed that, when the outcome was changed from temporary to permanent, ratings of the appropriateness of the care given decreased by 31%. Meurier et al used attribution theory to demonstrate that, when reading scenarios about errors with serious and minor consequences, nurses also attached more importance to the error if the outcome was severe.32

In a recent study we showed that the behaviour of healthcare professionals was rated as more risky and inappropriate, and their responsibility greater, when the outcome of an adverse incident was more serious.33 This finding goes beyond the earlier literature which found that judgements of appropriateness were less favourable when the outcome was more serious. We found that judgements of responsibility (that is, blame) are also associated with outcome. This finding supports early research claiming that the consequences of an action affect the attributions of responsibility for that action.34,35 Where information is available about the behaviour in terms of precautions adopted and/or reprehensibility, this also has a strong influence on attributions. Our study showed that violations, where a protocol or guideline had been ignored, were deemed to be more blameworthy than either error or compliance with the protocol or guideline (unsurprisingly), irrespective of outcome.33

We can therefore expect that, in situations where the consequences are serious and where behaviour deviated from approved methods of working, colleagues and victims will be less sympathetic and will tend to blame the perpetrator. This tendency to blame has serious psychological sequelae for those trying to come to terms with the consequences of their own behaviour.36 Goldberg quotes from a physician recalling his own feelings on dealing with a mistake:

“The drastic consequences of our mistakes, the repeated opportunities to make them, the uncertainty about our own culpability when results are poor, and the medical and societal denial that mistakes must happen all result in an intolerable paradox for the physician. We see the horror of our own mistakes, yet we are given no permission to deal with their enormous emotional impact…The medical profession simply has no place for its mistakes.”

Coping strategies

Psychological theory suggests that the strategies we use in dealing with problems, including error, are of two main types: problem focused coping strategies, which include information seeking and problem solving, attempt to deal with the problem itself, whereas emotion focused coping strategies (such as denial, giving vent to negative feelings, and trying to come to terms with an error) attempt to deal with the negative emotions aroused by the problem.37 In relation to medical error, both types of coping are probably needed. Research investigating the coping responses of doctors who have made mistakes38–40 in different practice areas has identified a number of common themes that militate against recovery and future improvement. These include a reluctance to discuss mistakes with colleagues, emotional problems, and ineffective coping responses such as denial or blaming others. Christensen et al39 suggest that emotional coping strategies should be used by physicians in dealing with the long lasting emotional responses (including fear, guilt, anger, embarrassment, and humiliation) that follow from their mistakes. The importance of emotional coping strategies such as personal validation, reassurance, and professional reaffirmation in coming to terms with the mistake, and the need for support from colleagues in dealing with the consequences, have also been emphasised in the research literature.40 Support from colleagues may be a very helpful aspect of dealing with a serious mistake, but is it forthcoming? Anecdotal evidence suggests not.

In an article in the BMJ, Wu41 describes his experience of error.

“When I was a house officer another resident failed to identify the electrocardiographic signs of the pericardial tamponade that would rush the patient to the operating room late that night. The news spread rapidly, the case was tried repeatedly before an incredulous jury of peers who returned a summary judgement of incompetence.”

Unrealistic optimism

When someone around us does make an error that has negative consequences, another way in which we may cope is by denying personal vulnerability to the same sort of negative outcome. The disbelief of the doctor’s peers described above, reflecting a professional denial of the fact that everyone makes errors, can also be explained with reference to biases in our processing of information about risk. There is now strong evidence in a variety of groups, including drivers,42 heart attack patients,43 and motorcyclists,44 of unrealistic optimism about relative risk—that is, in comparing ourselves with similar others (such as people of a similar age), we consider ourselves less at risk of a negative event (such as a heart attack). This bias in information processing is thought to offer a partial explanation for behaviour that occurs despite knowledge of the associated risks.45

Illusion of control

A further cognitive bias reported in the literature is known as the illusion of control. Like unrealistic optimism, illusion of control has an effect on the processing of information about the probability of encountering a negative event.46 However, illusion of control locates the source of the expected outcome in terms of personal control. It involves the tendency to believe that we have more control over our own behaviour and over the situation than is actually the case. In other words, a nurse may feel less vulnerable than others to error because she considers herself to be more experienced, skilled, or efficient than her colleagues. According to research, very few drivers (1%) consider themselves to be worse than average drivers—a statistical impossibility.47

The cognitive biases outlined above serve to minimise our sense of personal vulnerability to negative events and to foster an unsympathetic response to individuals who do make errors. Moreover, awareness of these information processing biases could profitably be included in risk communications, in interventions targeted at reducing risky behaviours, and in incident/accident investigations.

Key messages

  • Everyone’s thinking is affected by a range of cognitive biases.

  • Cognitive biases include heuristics or “rules of thumb” that help us to understand behaviour.

  • Use of such rules of thumb can lead to a tendency to blame the individual when negative events occur.

  • The tendency to blame makes achieving a blame free culture more difficult.

  • There is a need to be aware of these cognitive biases and to guard against their operation.

As well as helping us to understand the biases that operate when explaining our own and others’ behaviour, the concepts of attribution theory can be used to help us think clearly about responsibility in the event of an incident. Attribution theory might usefully be employed in the identification of the few poor nurses and doctors who are involved in a disproportionate number of negative events. The theory outlines the principles of consistency, distinctiveness, and consensus as useful in describing and understanding behaviour.48 In practical terms, if a nurse or doctor repeatedly makes errors (high consistency), if those errors take different forms and occur in different situations (low distinctiveness), and if they are errors that other people are unlikely to make (low consensus), then it is likely that this pattern of errors reflects some problem with the individual. The same constructs can be used to indicate where blame is not appropriate. For example, in investigating the Ladbroke Grove train crash,49 the burden of responsibility might have been placed on the driver himself—a relatively inexperienced individual. However, it soon became clear that the driver was not alone in passing signal 109 at danger (high consensus). It was also obvious from previous records that the driver had been an excellent trainee and was not prone to errors in different situations (low consistency and high distinctiveness). The investigation of this accident therefore required a focus on the signalling system, the track layout, and the warning systems in the Ladbroke Grove area in addition to the training, route knowledge, and well being of the driver.
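
To show how the three covariation principles combine, here is a small illustrative sketch (ours, not part of the original analysis; the representation and the simple decision rule are assumptions): high consistency together with low distinctiveness and low consensus points towards the individual, while the opposite pattern, as in the Ladbroke Grove example, points towards the system.

# Illustrative sketch of the covariation principles (consistency,
# distinctiveness, consensus) applied to a pattern of errors; the
# representation and decision rule are ours, not the article's.

from dataclasses import dataclass


@dataclass
class ErrorPattern:
    high_consistency: bool      # does this person repeatedly make errors?
    high_distinctiveness: bool  # are the errors confined to one kind of situation?
    high_consensus: bool        # do other people make the same errors?


def likely_locus(p: ErrorPattern) -> str:
    """Indicate whether the pattern points towards the person or the system."""
    if p.high_consistency and not p.high_distinctiveness and not p.high_consensus:
        # Repeated, varied errors that others do not make suggest an
        # individual problem worth reviewing.
        return "individual"
    # Otherwise the situation and the wider system deserve the closer look.
    return "system"


# Ladbroke Grove-style pattern: other drivers also passed the signal at danger
# (high consensus), and this driver was not error prone in other situations.
print(likely_locus(ErrorPattern(high_consistency=False,
                                high_distinctiveness=True,
                                high_consensus=True)))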

CONCLUSIONS AND IMPLICATIONS

This short review of the literature on types of human error has shown that the errors made by the very small minority of healthcare professionals deemed incompetent are simply too few to account for the large numbers of errors recorded in recent studies.50 It is now widely recognised that errors are a consequence of the systems in which humans work and the way they are “wired up” to do the job. We have outlined the importance of the attributions we make and the cognitive biases that affect our thinking. In understanding error we need to be aware that everyone is lazy, in the sense that they prefer not to waste information processing resources on routine tasks, relying instead on heuristics or “rules of thumb” in understanding their own and others’ behaviour. We have shown how these heuristics or cognitive biases, which are universal and part of normal thinking, may influence our understanding of negative events in health care and the ways we react to those involved in an adverse event. Raising awareness of the operation of these biases should help to avoid a narrow focus on individual culpability and facilitate a more comprehensive and sophisticated approach to incident investigation.

REFERENCES
