
HUMAN ERROR
  T B Sheridan
  32 Sewall Street, Newton, MA 02465, USA; t.sheridan@attbi.com


    In the 1970s and 1980s there was great interest among applied psychologists and systems reliability engineers in analysing accidents and “near miss” incidents in large scale systems where public safety was a primary concern. Efforts to define and develop taxonomies of human error were motivated by the meltdown at the Three Mile Island power plant near Harrisburg, PA, by the nuclear plant accident at Chernobyl in the Soviet Union, by the poison gas release at Bhopal, India, and by aviation’s deadliest crash, the collision of two 747 aircraft at Tenerife in the Canary Islands. Key to these efforts were the contributions of Professor Jens Rasmussen of the Risø National Laboratory and the University of Copenhagen in Denmark.1–3 Risø had been assigned the task of evaluating whether Denmark should build nuclear plants (it was eventually decided not to, although neighbours Sweden and Germany had made the decision to go ahead with nuclear power). Human error rather suddenly became a fashionable topic in the human factors or “cognitive engineering” field, and an early series of international meetings was convened by John Senders4 of the University of Toronto. Subsequently, James Reason5 of Manchester University in the UK and Erik Hollnagel6 of Linköping University in Sweden wrote well known books on human error. All the while the US medical community, while not ignoring patient safety, seemed reluctant to participate actively in such discussions of human error, the medical culture being oriented to avoiding public scrutiny of medical error for obvious reasons of exposure to litigation. Only in the last two decades has the medical community become open to taking a hard look at medical error. The anesthesiology patient safety movement led by Dr David Gaba of the Stanford VA Hospital was one early push in this direction, and there were other patient safety efforts until finally the Institute of Medicine report “To Err Is Human” pushed the door wide open.7

    Meanwhile, the human error theoretical community began shifting gears. Rasmussen’s classic paper reproduced here characterises that shift. The nature of the shift was the realisation that human error cannot be studied independently of individuals at work in their institutions, but must be studied in the context of such work. As Rasmussen puts it: “errors cannot be studied as a separate category of behavior fragments” but must be viewed within “cognitive control of behavior”.

    From my own perspective, as a student of safety and human behavior with training as both a psychologist and a control engineer, “error” is simply a difference between an actual state and a desired state (“state” simply meaning a set of observable, well defined variables). There is no control without measurement of state, since control actions are applied to reduce that state discrepancy. Efforts to count errors as though there were a consensus on what exactly constitutes “an error” are open to considerable question. What one person considers “an error” another may regard as an acceptable deviation from standard practice because of special circumstances. Some deviations from the norm may be trivial while others may have very grave consequences. Most of what we consider errors in everyday life are compensated for before they have any noteworthy outcome. The threshold between error and non-error, in other words, depends quite arbitrarily on the person making the judgment and the context of the behavior. Any act judged to be in error therefore carries an implicit qualifier of “acceptable” or “unacceptable”.
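    To make this control-theoretic reading concrete (the notation here is my own illustration of the state-discrepancy view, not anything Rasmussen prescribes): if x_desired(t) denotes the desired state and x_actual(t) the measured actual state at time t, then the error is simply e(t) = x_desired(t) − x_actual(t), and control actions are chosen so as to drive e(t) towards zero. Whether a particular non-zero e(t) is then called “an error” in the everyday sense depends entirely on the threshold the observer applies to it.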

    CHANGING THE FOCUS

    Rasmussen makes the case for moving the focus away from “error” and towards the task and social context in which any “error” may occur. He does this in terms of three types of human–system analysis: (1) traditional task analysis and human reliability estimation, currently the most common way errors are studied in industry; (2) causal analysis of accidents after the fact, currently the most common form of error (or “adverse event”) analysis in medicine; and (3) design of reliable work conditions and sociotechnical systems. His plea is to do more of the latter.

    Let us consider each of these in turn and relate the issues that Rasmussen brings up to corresponding issues in health care.

    (1) Traditional task analysis works best for fixed procedures where discrete steps can be identified and omission of these steps counted (as errors). The errors with greatest probability and consequence are then focused upon and causal factors are sought. There are situations, however, where traditional task analysis falls apart. One such situation is when the sequence of steps is not fixed so that omission of a step cannot be identified. Another is when commissions occur: an action is taken that is not part of the appropriate procedure. Further, task analysis is hampered when there is no obvious denominator by which to normalize—for error probability to be estimated one needs to specify that out of x “opportunities” y “error” events occurred, so that y/x is an error probability estimate. But how to define an “opportunity”? Is it a unit of time, or a procedure, or a patient, or a healthcare worker? It is obviously an arbitrary call. While health care is proceduralized to some extent, the procedures are seldom rigid; there are many reasons for variation depending on circumstances, so that the above types of difficulties arise when objective task analysis is contemplated.
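    As a hypothetical illustration of the denominator problem (the numbers are invented purely for the arithmetic): suppose y = 5 medication errors are observed on a ward in one month. If the “opportunity” is taken to be a dose administered and x = 10 000 doses were given, the estimate is y/x = 5/10 000 = 0.0005 errors per dose; if the opportunity is instead taken to be a patient and x = 500 patients were treated, the estimate is y/x = 5/500 = 0.01 errors per patient. The same five events yield rates differing by a factor of 20, depending entirely on which denominator is chosen.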

    (2) Causal analysis of accidents after the fact is common in medicine. However, as Rasmussen points out, the decomposition of causal sequences can be done in a variety of ways; there is no standard. Furthermore, each causal event has precursors going arbitrarily far back in time. What “stop rule” should be used? The tendency has been to focus on immediate or proximal causes such as missteps in a diagnostic or therapeutic procedure, while the more compelling cause may lie much farther upstream with caregiver training, hospital policy, or medical culture. The individual analyst, or the group of people involved in, for example, a morbidity/mortality review meeting, may easily jump to conclusions about causes with which they are familiar and ignore other causal explanations which they are not equipped to recognize.

    (3) Traditionally, in tracing behavior back to the causes of an error, responsibility has been assigned to a person who is immediate in time and space. Rasmussen emphasizes how circumstances may not be under the control of that person, and that there may be systematic but unseen traps set by upstream decisions. Short-sighted stop rules often do not go far enough upstream. Indeed, the whole idea of responsibility for errors needs to be reconsidered. This problem is very evident in medicine today: many people assert that the “blame game” is not helpful in getting to the systematic causes of errors. However, others assert that the standards of personal responsibility cannot be relaxed (a position the tort lawyers depend upon to make their livings). In any case, as Rasmussen points out, the aim is to “find conditions sensitive to improvements”.

    Rasmussen suggests that the design (or redesign) of systems based on analyses tied to particular paths of accidental events can be dangerous. We all know that no accident ever happened exactly the same way twice. Furthermore, the time course of events is unpredictable. For these reasons, Rasmussen argues, analysis should be carried out at a level higher than a particular accident sequence.

    ADAPTATION AND LEARNING

    Rasmussen makes a strong case for viewing the human worker—in the current context, the healthcare professional—as being in a continuous state of adaptation and learning. All human work is defined by task goals, task constraints, available tools, and the capabilities of individual persons. But there is always some uncertainty with respect to one or more of these factors; circumstances continually change. Rasmussen discusses how the adaptation and learning demanded by changing circumstances necessarily force errors to occur. This is normal.

    Humans seek solutions that take the least effort, and healthcare workers are no exception. Indeed, no one can blame them for this tendency, for the demands on time and energy can be overwhelming. But the result is that workers see their tasks and perceive process quality in terms of satisfying immediate goals. If the patient makes it through the immediate procedure and there are no adverse events, the tendency is to forget many of the details of what happened. This tendency was obvious to the writer in the course of an observational study of complex surgical procedures at a major Boston hospital. If no serious adverse event occurred and the patient was not harmed, small safety compromising events were never recorded and were usually forgotten by the members of the surgical team. However, much learning can be gained from reflecting on events that did not go as planned.

    Most of us tend to overlook—and not even admit to—uncertainty. Medical students are at their worst in admitting uncertainty. They learn quickly that showing confidence is part of the culture of their education and practice. They are reluctant to admit uncertainty to their mentors and they dare not admit uncertainty to their patients. Their conditioning in personal responsibility is very strong, is reinforced by the Hippocratic oath, and continues through training and right into medical practice. This cannot but reinforce the “blame game”.

    Dealing with the familiar is usually preferred over dealing with the unfamiliar because errors tend to be associated with the unfamiliar. Instructions for abnormal conditions conflict with familiar methods and produce stress. Feedback (knowledge of results) is essential to learning, but feedback in patient care is often delayed by hours or days, making learning difficult. Yet we all know that skill develops as a function of experience with a wide variety of situations, including abnormal situations. How experimental can the healthcare worker afford to be? Ethics demand conservative diagnosis and treatment, yet refusing to admit uncertainty and ignoring the occasional need for innovation can lead to robot-like execution of procedures in an unthinking, unreflecting manner, which is surely not in the best interest of the patient.

    COOPERATION AND ROLE ALLOCATION

    Another factor considered in Rasmussen’s paper is the problem of cooperation and role allocation. In complex technology based organizations (such as hospitals), roles continuously adjust and boundaries shift. For example, handovers are continually occurring as nurses and physicians go on break or go off shift. Residents and medical and nursing students are continually being mentored, and being asked to perform certain tasks in lieu of the attending physician or regular nurse. Communication and cooperation are therefore essential. Yet the disinterested observer can spot frequent gaps in communication—important information not being recorded or given verbally in a handover; plans and intentions not being communicated by the surgeon to other members of the operating staff or ICU or emergency room team; records not being available at the time or place they are needed. While Rasmussen does not explicitly discuss communication within healthcare systems, the reader can easily translate his ideas into salient pointers for health care.

    Hospitals can be viewed as self-organizing systems where workers are simultaneously trying to satisfy high level criteria (such as improving efficiency and reducing costs) and low level criteria (getting the current task done). Rasmussen discusses how awareness and monitoring of violations of the high level criteria are difficult at the local level. He points out that the activities which threaten the various conditions normally belong to different levels of the organization, and knowledge of how to fix things is often held at a different level from where the fix needs to be made. He asserts that catastrophic system breakdown is a common feature of self-organizing systems. To avoid such catastrophic breakdown there is a need for the boundaries of adaptation to be clear and visible. This includes transparency of role allocation—making clear who is expected to do what. Technology shapes organizations “bottom up” by imposing constraints and, much like the roles of humans, the constraints of the technology must somehow be evident to all who interact with that technology. In the hospital this includes formal postings of assignments and written instructions, but more than that it means continual checking and confirming with one another that goals, plans, roles, and knowledge of how to operate the technology are clear, and that assumptions are shared. It means more emphasis on “safety culture”.8

    CONCLUDING REMARKS

    Although theoretical and not specifically directed towards patient safety, Rasmussen’s paper has a lot to say about safety culture. It asserts that past efforts to cope with human error in complex organizations where risk is a significant concern have focused too narrowly on task analysis, after-the-fact causal analysis, and error attribution, and not enough on analysis of the behavior of individuals and their interactions within the organization.

    REFERENCES