
Who do we blame when it all goes wrong?
K Catchpole
Dr K Catchpole, Quality, Reliability, Safety and Teamwork Unit, Nuffield Department of Surgery, University of Oxford, The John Radcliffe Hospital, Headington, Oxford OX3 9DU, UK; ken.catchpole@nds.ox.ac.uk


It is easier to perceive error than to find truth, for the former lies on the surface and is easily seen, while the latter lies in the depth, where few are willing to search for it. (Johann Wolfgang von Goethe, 1749–1832)

Catastrophic incidents usually occur as a result of a sequence of small events that accumulate to create a more serious situation.1 The resilience of healthcare delivery is such that most problems are of little or no consequence to the patient, and it is only in unfortunate situations that these otherwise insignificant problems combine in a discrete time and space to create a catastrophe. In such situations, it is easier to blame the person making the last mistake than to identify the sequence of events, arising from deficiencies in the systems of work, that predisposed to a fatal error. The unhelpful but enduring view, encouraged by the media, many healthcare managers and practitioners, and even some safety scientists, is still that those who make such errors were not trying hard enough. Since arguably there is never only one person at fault, blaming individuals does not allow error-inducing states to be identified and avoided before they can cause harm again.2 In constructing a defence against clinical negligence based on the observation that some behaviours are non-conscious and automatic (and thus unavoidable), Toft and Gooderham3 present a legal argument for moving away from blaming care givers at the sharp end and towards managerial responsibility for avoiding errors (see page 69). This may reflect an important shift in the management of clinical safety toward far greater sharing of responsibility for unintended harm, and a greater need to integrate law and the psychology of error.

Behaviour is mediated by a wide range of non-conscious phenomena, for which there is an enormous history of research, perhaps most famously illustrated by Pavlov.4 Involuntary automaticity is a convenient label that aids the communication of a collection of psychological phenomena across disciplinary boundaries, much as the concept of situation awareness has been useful in providing a generic label for other phenomena in a range of human performance contexts.5 Essentially, involuntary automaticity describes our ability to perform extremely complex tasks very quickly and with little effort because the relationship between stimulus and response is so familiar that it has become automatic, often short-cutting conscious analytical processing. The disadvantage is that humans cannot always avoid error by trying harder or by consciously directing their attention towards performing (or not performing) a certain task. For example, the preconscious processes in our brain allow us to recognise our own name in a noisy room without choosing to listen for it6 but might distract us from another conversation. Thus, a system that relies on the detection of exceptions will be predisposed to fail if it encourages automatisation by requiring the same task to be performed quickly many times, with a response that is usually invariant. In such situations, it seems questionable to identify a doctor or a nurse as negligent when they have become trained to respond to a stimulus in a certain way. The example given by Toft and Gooderham illustrates a situation where multiple doctors made exactly the same error, suggesting a systemic cause rather than a negligent individual. In other industries, prosecution of such inadvertent human error is less frequent because it is recognised as a barrier to safety improvement. Yet, though we trust healthcare professionals to deliver high-quality care, sometimes their practice is insufficiently mindful to justify that trust. How, then, do we identify care givers who are failing in their duties to protect the patient? What makes Toft and Gooderham’s argument of particular interest, therefore, is the observation that practitioners are far more likely to achieve a successful defence against clinical negligence if they have identified a problem with automaticity and brought it to the attention of management prior to the catastrophe. This gives practitioners a motive to report deficient systems of work, delineating mindfulness from negligence and shifting the locus of responsibility toward those who determine and manage work systems.

Practitioners leave themselves open to allegations of negligence if they fail to raise safety concerns regarding automaticity prior to an event, but may have a legal defence if they do. At a simple level, this might improve rates of incident reporting, which are well acknowledged to be especially poor among doctors.7 However, identifying and reporting every opportunity where involuntary automaticity (or some other predisposition to error) might occur is an impractical burden, and since practitioners are not trained in the recognition of error-inducing states, their ability and motivation to identify systemic deficiencies are often poor. Doctors in particular take pride in the fallacy that they can deliver high-quality care regardless of the state of the system around them, and openness can lead to recrimination from colleagues who are still enraptured by the mistaken view that only bad people make mistakes, or from managers who may be keen to distance themselves from liability. The divide between management and practitioner also creates a major challenge when changing systems of work. Since management do not have sole responsibility for changing the behaviour of practitioners, and in most cases do not have the clinical experience to configure a solution for specific local or technical needs, healthcare practitioners need to participate actively in the rectification of problems, rather than “washing their hands” once an issue has been raised. Given the range, frequency and types of systemic problems,8 coupled with the culture of blame that still pervades medical practice, is it negligent that minor incidents are quickly forgotten, that recurrent problems become accepted as the norm, and that more serious events are rarely reported or learnt from? Since little attention is usually paid to such events, it seems convenient and practical for all concerned to ignore them. However, Toft and Gooderham suggest that withholding reports of such events would hamper the care giver’s legal defence. This makes an important contribution to understanding the legal responsibility of the healthcare practitioner, offering an enlightening progression of the perennial safety debate about where responsibility lies when patients are inadvertently harmed.

When things go wrong in healthcare, there are often two victims: the patient who is harmed, and the care giver who made the critical error. Toft and Gooderham raise the prospect of additional victims—hospital managers who are under time, financial, performance-target and perhaps public relations pressures, who are already predisposed to blame care givers or to implement quick but ineffective fixes, or who simply do not have the resources or support to deal with safety problems. A key question is whether this will encourage better relationships between management and practitioner through a clearer understanding of shared culpability, or deepen the divide by further entrenching fear and blame on both sides. Where it encourages the former—and in the long term, this must be the more rational approach—improvements in safety are far more likely to follow. Regardless of the medicolegal implications, which as yet remain untested, this paper also demonstrates a productive integration of the disciplines of safety, law and psychology. Indeed, a better understanding of how involuntary automaticity is encouraged or discouraged, whether it can be consciously manipulated, and how it relates to confirmation bias, attention, cognition, skill acquisition and learning may have substantial legal implications for care givers and healthcare managers. Moreover, if this defence is tested in court, the legal argument that is likely to ensue will have a powerful influence on our understanding of the difficult relationship between clinical negligence and safety in healthcare.

REFERENCES

Footnotes

  • Funding: KC is gratefully supported by a Leverhulme Trust Early Career Fellowship.

  • Competing interests: None.
