Abstract
Background Cross-fertilisation of ideas across industries, settings and contexts potentially improves learning by providing fresh insights into error pathways.
Objectives and hypotheses To investigate six cases of human error drawn from disasters in the space, shipping, aviation, mining, rail and nuclear industries, and to identify similarities and differences in the antecedents to errors, the way they manifest, the course of events and the way they are tackled. We also examined the extent to which human intervention can exacerbate problems by introducing new errors, how the cases were resolved and the lessons learnt.
Design, setting and participants Exemplar disaster events drawn from a very large sample of human errors.
Results It is possible to identify and model a generic disaster pathway that applies across several industries, including healthcare.
Conclusions Despite differences between industries, it is clear that learning from disasters in other industries may provide important insights into how to prevent or ameliorate disasters in healthcare.
Keywords
- safety
- root cause analysis
- human error
- health system
- disasters
- safety culture
- health care quality
The word disaster has its roots in Greek and Latin. In each language, the dis- is a pejorative prefix, denoting a moving away from or an absence of, while astrum (Latin) or αστήρ (aster) (Greek) is a star or planet. Disasters are, by their etymology, “bad stars”.1 Major industrial and transport disasters, while tragic, come with the opportunity to learn about the complex technological, psychological and social factors that contribute to the course of events. The health industry has often said “we are different” from other industries, an attitude that can effectively inhibit learning from the errors of others. To gain insights into how to reduce medical errors and improve patient safety, we compare information about the causes, paths and consequences of disasters in six other industries: space, shipping, aviation, mining, rail and nuclear. We show how analysis of exemplar cases can inform, provide insights and lead to improved learning through the cross-fertilisation of ideas, and in doing so turn “bad stars” into “guiding lights”.
Learning from disaster pathways
Explaining disasters
Disasters are a rich source of information about the systems and subsystems that allow them to happen.2 To make sense of them, commentators have drawn on two key explanatory theories—normal accident theory (NAT) and high reliability theory (HRT)—to help account for and contextualise the causes and their consequences.
NAT states that accidents are normal, or inevitable, in complex systems where the number of components (including human, infrastructure and technological factors)3 and the ways in which they can be combined are, for all practical purposes, infinite.4 This theory distinguishes between these normal accidents and those accidents that are directly caused by component failures. How closely coupled a system is (ie, how tightly bound the stakeholders are) determines how swiftly a change in one variable cascades through the system, thereby affecting other variables. This is a major factor in the progress of events, in whether accidents are prevented, and in the system's ability to recover from errors and disasters.5 In healthcare, the best-known explanatory model for errors is Reason's metaphorical Swiss cheese hypothesis. Reason argues that an error can find its way through a system if the holes in the system's defences, just as in slices of Swiss cheese, line up.6
In some senses, HRT is the other face of disaster theory. HRT argues that organisations can prevent and recover from errors, and that highly reliable organisations exhibit common, identifiable characteristics that can be acquired by other organisations.7 These characteristics can help prevent disasters. They include a flexible structure, an emphasis on reliability, the effective use of rewards for safe behaviours, a valid and reliable process of sense making, heedfulness, the migration of decision making to those with expertise, a big-picture view, a level of organisational redundancy, tight selection and high levels of training of personnel, reliance on collegiality, negotiation within a formal command structure, and a culture of co-operation and commitment to high standards.8–12
While these theories sit at opposite ends of the explanatory spectrum (either disasters are inevitable or they are preventable), theorists have tried to bridge the gap between them by arguing that major disasters can be avoided by understanding the complex factors involved in those accidents that seem inevitable no matter how hard industries try.13 For this learning to occur, organisations have to be able to make sense of processes such as the normalisation of deviance,14 which has been used to explain how small incremental departures from safety can lead to a major disaster.15 Small non-standard behaviours in patient administration and care can become routine, only revealing their impact when, for example, a health intervention is undertaken on the wrong patient.16
Exemplar disasters
Learning from such events is the purpose of disaster research. Each of the disasters we consider here has been examined and analysed by many experts from different perspectives: systems dynamics, human factors analysis, communication theory, structural and other engineering standpoints, safety science, and sociology and psychology, among other disciplines.17 18 Rarely are they analysed for their cross-industry value, and what they have in common is infrequently considered. We therefore combine these approaches to consider how the essential elements (human, technological, procedural and communicative) that contribute to disasters are similar across industries, and to create a generic model of a disaster trajectory based on the pathways of exemplar events.
We used critical case sampling19 to select disasters in six industries: the Challenger Space Shuttle (USA), space travel; the M/S Herald of Free Enterprise ferry (Belgium), shipping; Air France Flight 4590-Concorde (France), aviation; Westray Mine (Canada), mining; Tangara G7 train (Australia), rail; and Chernobyl nuclear reactor (the then Union of Soviet Socialist Republics, now Ukraine), nuclear power. These cases were chosen because they are emblematic of things that can go wrong in their respective industries. Some share another characteristic: the industries within which the disasters occurred had either failed to learn from antecedent pretriggering sequences and precursor events or did not apply their learning subsequently. For example, the Challenger (1986) disaster was followed by the Columbia Space Shuttle explosion (2003). Chernobyl (1986) had a predecessor in Three Mile Island (1979), from which Perrow developed NAT. The head of the Inquiry into the Waterfall train disaster argued that if the recommendations he had made 4 years earlier in the Inquiry into the Glenbrook rail disaster (1999) had been implemented, Waterfall (2003) could have been avoided.20
Table 1 presents the characteristics of each disaster. It compares the incident features; their initial, triggering causes, decision points and pathway events; their culmination; their direct and indirect consequences (including in several cases, unintended flow-on effects, such as the death of rescuers); the aftermath of the disasters; and, in summary, the lessons learnt.
Disaster pathways
From this analysis, based on the enquiries and their associated reports, we plotted a common disaster pathway. Disaster pathways are the routes along which cascading events can progress from relative stability to full-blown disaster (figure 1).
While all systems have the potential for disasters, HRT states that systems with optimum conditions can be more stable and less accident prone. These systems operate in such a way as to be able to pick up potential disasters early. This did not happen in the exemplar cases, at least not in the circumstances of the particular disaster for which they have become infamous. It was the inability to pick up on near misses as warning signs that led the Inquiry into the Westray Mine, for example, to call the events leading up to the explosion “a predictable path to disaster”.25 In figure 1, based on our review of the six exemplar disasters, we schematically outline a generic disaster pathway. Our hypothesis is that prediction should open the system to prevention or at least protective correction.
The generic disaster pathway model illustrates the disaster and alternative pathways. Events are measured against their impact (the severity, y axis) and time (the diachronic, x axis). The systems up to the point of the trigger event are considered by participants to be within control (ie, deemed to be relatively stable). Of course, such systems have embedded within them the latent conditions (involving human factors, technology and unforeseen circumstances) that could lead to a disaster, as well as those exhibiting correctable, small-scale precursor events and near misses, as predicted by NAT. Following the trigger or causation event or events, two pathways are possible. If the trigger is caught immediately, or the sequence does not accelerate towards disaster, the pathway can proceed as normal (eg, the Concorde carries on flying, the ship leaves harbour and the mining operations continue).
Latent conditions (eg, an open bow door) may be correctable up to a certain decision point. Once a critical juncture is reached, however, the time between that point and the disaster may be very short. At that tipping point, if the triggering events are not caught in time, the pathway proceeds with increasing severity to culmination; this progression is not necessarily linear, however, and factors may emerge that arrest the momentum. Four possible pathways emerge, depending on the characteristics of the organisation. In the best-case scenario, the situation is corrected, the crisis abates and the situation improves rapidly (technicians note the increase in temperature in the reactor and shut it down immediately). In the second case, the situation improves haphazardly or stabilises but does not proceed to disaster (the pilot takes preventive measures and the plane lands off the runway with limited or no loss of life). In the third case, the situation continues at the crisis plateau until the disaster ends (the train derails or the shuttle explodes). In the final case, the situation continues to increase in severity and new problems arise (the nuclear particles escape into the atmosphere and affect the surrounding countryside, or the rescue team inhales asbestos from the plane wreckage). It is important to note that at individual points along any of these four trajectories, depending on the sociotechnical nature of the disaster, it is possible for the situation to improve, stabilise or worsen.
Once the disaster has culminated, in principle the process of learning from the event can begin. Figure 1 also shows a generic enquiry process.26 Such processes can range from trauma counselling and post-event debriefs, through formal root cause analysis, to a full-scale public enquiry, such as a Royal or Presidential Commission. Often, multiple activities with differing purposes and terms of reference are initiated, depending on the scale of the disaster and the needs for catharsis, healing, learning or punishment.27 This general pathway theoretically encapsulates any type of disaster process across many industries. It remains to be seen whether knowledge, and application, of this generic model could successfully moderate what some might consider inevitable but others have labelled, perhaps more appropriately, predictable.13 25
Implication for healthcare: turning bad stars into guiding lights
The generic model presented in figure 1 facilitates the extrapolation of possible severity trajectories over time, acknowledging the importance of systems and events before and after the disaster pathway. The model highlights how, after the culmination of a disaster, four possible pathways may still continue until the end of the disaster: rapid improvement, haphazard improvement, plateau and accelerating severity resulting in new problems. The model can be applied to large-scale breaches of patient safety or to individual cases of error. Disasters such as the Björk–Shiley convexo–concave prosthetic heart valves case,28 in which defective valves were inserted in patients around the world, or the death of teenager Vanessa Anderson at the Royal North Shore Hospital, Sydney, Australia,29 from a series of unfortunate errors, followed a similar trajectory, and readily map to figure 1.
A review of international patient safety enquiries supports the proposition of common elements between disasters in healthcare26 and those in other industries. The elements we propose, including latent conditions, triggering events, exacerbating factors and culmination pathways, seem ubiquitous in patient safety enquiries around the world.26 At the Bristol Royal Infirmary, for example, latent conditions included a shortage of paediatric cardiac staff working in a low-volume specialist unit stretched across two physical sites, functioning for an extended period without a specialist paediatric cardiac surgeon.30 Once latent conditions such as these are established, any number of triggering causation events can ignite a full disaster. Events as seemingly trivial as the employment of a single individual31 or the discharge of a patient,32 or as broad as the outbreak of hospital-acquired infections33 or an inappropriate procedure for the acquisition of blood products,34 have all set in motion healthcare disasters.
Escalation factors are equally discernible in healthcare. Poor communication and a lack of teamwork are among the hallmark findings of most patient safety and quality enquiries, as is a lack of adequate monitoring processes.26 35 As risk factors accumulate and go unnoticed or unaddressed, the healthcare disaster trajectory steepens. The length of this trajectory varies: concerns were raised about the quality of care at Bristol for close to a decade.36 This escalation serves to highlight an important element in our model. Recognition of risk, such as suboptimal care, is not necessarily a trigger for adequate corrective action. Escalation pathways in healthcare can include long plateaus of normalised deviance before a dramatic event leads to the culmination of disaster. During the escalation period at Bristol, clinicians knew that their performance was poor (double the average mortality of other units by 1995); however, they continued to operate.30 Even earlier, by 1992, an official in the UK Department of Health was aware of the situation at Bristol but did not act on the information. Similar situations of known risks and near misses being minimised or ignored occurred in cases in Slovenia, Canada, Australia and the UK.26 Furthermore, those individuals who did identify the risks, and went “public”, were exposed to significant retribution, as occurred in the Bristol and Challenger disasters.36 37
In healthcare, too, the culmination event leads to the various consequence pathways. In the case of blood product contamination in the UK, a slow response from authorities resulted in the continuation of the crisis and ultimately led to the death of 2000 people with haemophilia; moreover, 4670 patients acquired hepatitis C, of whom 1243 were also infected with HIV.34 The Björk–Shiley convexo–concave prosthetic heart valves situation stabilised once recipients of the faulty heart valve were identified and the valve was removed. Where individuals could not be identified, the situation plateaued, with the potential for harm continuing.28 In Bristol, the figurative culmination of the disaster was the death on the operating table of Joshua Loveday, on 12 January 1995. It was only after Joshua's death that virtually all paediatric cardiac surgery ceased,30 the disaster abated and the public review process commenced.
Here, the final element of our model emerges. Public enquiries are not the only response to disasters, but they occur frequently, and they are not merely functional. They are an important tool for regaining public trust; a sense-making process; an avenue for catharsis; a site for professional, organisational and public learning; and a symbol of closure.27 38 Additional work will be required to expand on the postdisaster aspects of the model, particularly in understanding how enquiries are triggered and how their recommendations might act to reduce the probability of future disasters.
Conclusion
In this article, we have shown how, while the specific technologies involved or technical engineering issues may differ in disaster pathways, many of the procedural, human and communicative characteristics are similar. Small, incremental, seemingly unimportant or minor choices or events across time move the disaster pathway upwards, increasing its potential or actual severity. Early warning signs, including trivial mishaps and lack of competence, appear in the stable system and early in the disaster pathway. These signs, and the individuals who raise them, are generally ignored or marginalised. Written or verbal communication is inadequate, as is teamwork. As a result, individuals operate on the basis of previous experience or assumptions, often ignoring their own intuitive concerns about pending disasters, and sometimes making decisions beyond their capabilities or authorisation. Predictive information that could act to halt or reduce the disaster is minimised, ignored, hidden, transmitted to the wrong people, disseminated at inopportune times or communicated in the wrong way. Where procedures including quality and safety protocols exist, they are unknown, or circumvented, sometimes repeatedly. Normalised deviance becomes part of the culture, and the error pathway traverses even the most stringent of systems barriers.
Further work will be needed to simulate different medical or health system error characteristics using this model and to map them in a much more detailed way to a generic pathway such as this. The ultimate test will be to answer the question, how can we turn bad stars into guiding lights for the health sector?
Footnotes
Funding Provided in part by National Health and Medical Research Council Program Grant 568612 and by the Australian Research Council's Linkage Project funding scheme (project number LP0775514).
Competing interests None.
Provenance and peer review Not commissioned; externally peer reviewed.