
Safety Science

Volume 37, Issues 2–3, March 2001, Pages 109–126

The paradoxes of almost totally safe transportation systems

https://doi.org/10.1016/S0925-7535(00)00045-X

Abstract

Safety remains driven by a simple principle: the complete elimination of technical breakdowns and human errors. This article tries to put this common-sense approach back into perspective in the case of ultra-safe systems, where the safety record reaches the mythical barrier of one disastrous accident per 10 million events (10⁻⁷). Three messages are delivered: (1) the solutions aimed at improving safety depend on the global safety level of the system. When safety improves, the solutions used to improve the safety record should not be further optimised; they must continue to be implemented at their present level (to maintain the safety health obtained) and be supplemented by new solutions (an addition rather than optimisation rationale); (2) the maintenance and linear optimisation of solutions with dwindling effectiveness can result in a series of paradoxes, eventually placing the system at risk and jeopardising the safety record obtained in the first place; and (3) after quickly reviewing ambiguities in the definition of human error and the development of research in this area, this article shows, through recent industrial examples and surveys, that errors play an essential role in the acquisition and effectiveness of safety, at individual as well as collective levels. A truly ecological theory of human error is developed. Theories of error highlight the negative effects of an over-extensive linear extrapolation of protection measures. Similarly, it is argued that accepting a limit on the performance of technical systems, through the presence of a minimum ‘noise’ of breakdowns and incidents, could enhance safety by limiting the risks accepted. New research opportunities are outlined at the end of this paper, notably within the framework of systems that are now safe or ultra-safe.

Section snippets

Introduction: an implicit and general-purpose safety model

In most industries, safety remains governed by a few simple and self-evident principles:

1. Conceptual designs generate systems with a high theoretical performance and safety potential, subject to technical and human failings. Breakdowns and human errors jeopardise operational and optimal safety, acting as ‘noise’ disturbing operations; ideally they should be totally eliminated or at least minimised. Conceptually, breakdowns and errors are symmetrically assessed, and detection logic, calculation

Safe and ultra-safe systems, and relative effectiveness of measures aimed at reducing breakdowns and human errors

Systems which are now ultra-safe, i.e. where the risk of a disastrous accident is below one accident per million events, behave differently with respect to safety than less safe systems.
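As a rough illustration of what these orders of magnitude mean in practice, the per-event rates used throughout this paper can be converted into expected accident counts for a given amount of traffic. The sketch below is an assumption introduced for this note: the annual event volume is hypothetical, and only the orders of magnitude of the rates paraphrase the levels discussed in the paper.

```python
# Illustrative sketch only: convert per-event accident rates into expected
# accident counts for an assumed annual traffic volume. The volume figure is
# hypothetical; only the orders of magnitude of the rates come from the paper.

EVENTS_PER_YEAR = 10_000_000  # assumed number of events (e.g. flights) per year

rates = {
    "dangerous system (1e-3 per event)": 1e-3,
    "safe system (1e-5 per event)": 1e-5,
    "ultra-safe system (1e-7 per event)": 1e-7,
}

for label, rate in rates.items():
    expected_accidents = rate * EVENTS_PER_YEAR
    print(f"{label}: ~{expected_accidents:,.0f} expected accidents per year")
```

With these assumed figures, the ultra-safe level yields on the order of one accident per year for the same traffic that would produce thousands in a dangerous system, which is why, as argued later in the paper, such a system generates very little of the incident ‘noise’ from which safety is normally tuned.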

These ultra-safe systems can be placed in the context of other systems:

1. Dangerous systems, where the risk of accident is greater than one accident per 1000 events (10⁻³), e.g. bungee jumping or mountain climbing

Developments in research on human error

Research studies on human error are nothing new. However, many concepts were extensively reviewed during this century (see Fig. 1 for a summary). In psychophysics and behavioural psychology, scientists perceived error as an instrument helping to measure the performance of experimental subjects. The study of error was not a goal per se, but merely a means to assess other psychological concepts. Threshold psychophysics was based on the delivery of the ‘good’ answer (correct perception of the

Extending the ecological safety model to incidents

Breakdowns and incidents follow the same logic as errors. An incident-free system becomes mute, and its safety can no longer be tuned. Investments stop being directed at safety and are earmarked instead for improving performance; the control and the spontaneous limitation induced by the occurrence of incidents no longer play their role. The system can then abruptly find itself in a situation of disastrous accident because its over-stretched performance has given rise to new risks (Rasmussen, 1993).

General development model for large systems

This article started with an obvious statement: errors and breakdowns must be eliminated to increase safety. This paradigm holds true for optimising the safety of man–machine systems up to a risk of one disastrous failure per 100,000 events (1×10⁻⁵). To reach this level of effectiveness, definitions do not need to be overly clarified, nor do complex cognitive processes need to come into play. Increased procedures, improved staff training, automation, and conventional error blocking solutions
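One way to make this plateau concrete is a toy defence-in-depth calculation; the model and all figures below are assumptions introduced for illustration, not material from the article. Stacking nominally independent protection layers multiplies the accident rate down quickly, but any residual common-mode weakness soon dominates, so further layers buy almost nothing.

```python
# Toy defence-in-depth sketch (all figures are assumed for illustration).
# Each barrier is taken to fail independently with probability 1e-2, except
# for a small common-mode failure probability shared by all barriers.

INITIATOR_RATE = 1e-3    # assumed rate of hazardous events per operation
BARRIER_FAILURE = 1e-2   # assumed independent failure probability per barrier
COMMON_MODE = 1e-4       # assumed probability that every barrier fails together

for n_barriers in range(5):
    independent_term = BARRIER_FAILURE ** n_barriers
    common_mode_term = COMMON_MODE if n_barriers > 0 else 0.0
    accident_rate = INITIATOR_RATE * (independent_term + common_mode_term)
    print(f"{n_barriers} barriers: ~{accident_rate:.1e} accidents per operation")
```

With these arbitrary figures the rate drops from 10⁻³ to roughly 10⁻⁵ with a single layer and then flattens near 10⁻⁷ however many layers are added: one reading of why optimising the same family of solutions eventually stops paying.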

Conclusion

Ultra-safe systems have reached today's safety level through a lengthy, extensive and crisis-ridden optimisation process, during which these systems have aged and matured. However, this process has also made systems more fragile and less adaptive. These systems will eventually be replaced by others having a different safety potential; thus goes the development of technical cycles. However, the challenge for today's safety practitioners is to manage the transition from these older technologies

Disclaimer

The ideas expressed in this paper only reflect the opinion of the author and must not be considered as official views from any national or international authorities or official bodies to which the author belongs.

References (42)

  • Allwood, C.-M., 1984. Error detection processes in statistical problem solving. Cognitive Science.
  • Valot, C., et al., 1992. Metaknowledge for time and reliability. Reliability Engineering and System Safety.
  • Visciola, M., et al., 1992. Communication patterns and errors in flight simulation. Reliability Engineering and System Safety.
  • Abbott, K., Slotte, S., Stimson, D. (Eds.), 1996, June. The Interfaces Between Flightcrews and Modern Flight Deck...
  • Amalberti, R., 1996. La conduite des systèmes à risques. [The Operation of High-Risk Systems.]
  • Amalberti, R., 1997. Paradoxes aux confins de la sécurité absolue. Annales des Mines, Fév. 1997, 9–15. [Paradoxes of...
  • Amalberti, R. Automation in aviation: a human factors perspective.
  • Amalberti, R., Wioland, L., 1997. Human error in aviation. Invited paper to the International Aviation Safety...
  • Argyris, C., 1990. Overcoming Organisational Defenses.
  • Dörner, D., 1980. On the difficulties people have in dealing with difficulty. Simulation & Games.
  • Dörner, D., 1990. The logic of failure. Phil. Trans. R. Soc. London, B327, ...
  • Duncker, K., 1945. On problem-solving. Psychol. Monographs, 58 (whole no. ...
  • Girin, J., Grosjean, M., 1996. La transgression des règles au travail. L'Harmattan, Paris. [Rules Transgression at...
  • Green, D., et al., 1966. Signal Detection Theory and Psychophysics.
  • Hollnagel, E., 1993. Human Reliability Analysis: Context and Control.
  • Hollnagel, E., 1998. Cognitive Reliability and Error Analysis Method, CREAM.
  • Kemmler, R., Braun, P., Neb, H., 1998. Analysis of in-flight situations and development of preventive measures. Paper...
  • Maurino, D., et al., 1995. Beyond Aviation Human Factors.
  • NATO, 1993. Advanced Research Workshop on Human Error, Bellagio, ...
  • Norman, D., 1981. Categorization of action slips. Psychological Review.