The WHO checklist consists of three parts: (1) the sign in before anaesthesia, (2) the timeout before incision and (3) the sign out before the patient leaves the operating room. Previous studies show that the WHO checklist reduces both complications from care and the 30-day mortality rate.2,3 These results are supported by other studies using similar checklist methodologies.1,6 Initially, evaluation focused on the effects of checklists using outcome measures such as complications and mortality. More recently, researchers have started to pay attention to the actual usage of checklists in practice by investigating compliance.7–10 The compliance rates reported in these studies can at best be considered moderate. Rydenfält et al8 report a compliance of 54% for the timeout part, despite timeouts being initiated in 96% of the cases studied. In the study by Cullati et al,7 the mean percentage of validated checklist items was 50% in the timeout and 41% in the sign out.
Given that previous studies show decreases in both complications and 30-day mortality,2,3 this raises the question: do safety checklists used with this level of compliance really make practice safer? Could it even be that the lack of compliance actually introduces new risks that were not present before? In this viewpoint, we investigate the latter question from a safety science perspective, introduce new perspectives on the usage and implementation of checklists in healthcare and outline suitable directions for future research.
The checklist as a defence against failure
The main idea behind checklists such as the WHO surgical safety checklist is that they serve as a defence, or barrier, between the danger or hazard and the patient.11
For example, if the wrong patient is sent to the operating room, the sign in should protect the patient from an improper surgical procedure at the wrong site by checking: (1) patient ID, (2) site and (3) procedure. These checks are repeated during the timeout, adding redundancy: if one check fails, the redundant one is meant to identify the problem. Illustrating this, in a study of incidents intercepted by the similar SURgical PAtient Safety System (SURPASS) checklist, de Vries et al12 showed that incidents were intercepted in 40.6% of the 6313 collected checklists. To complicate matters, some of the checklist items were certainly checked, formally or informally, even before the checklist was introduced.
As a barrier system, the WHO checklist could at best be defined as what Hollnagel calls a symbolic barrier system,13 that is, in those cases when there is a physical checklist present—for instance, posted on the operating room wall, reminding the surgical team that it should be used. In other cases, the checklist takes the form of an incorporeal barrier system, that is, when there is no checklist easily accessible and the team depends on memory in order to remember to go through the checklist. Compared with physical barrier systems, which are considered to have high efficiency, incorporeal and symbolic barrier systems have at most medium efficiency. When it comes to reliability, symbolic barrier systems have medium to low reliability and incorporeal barrier systems low reliability.13 In other words, symbolic and incorporeal barrier systems such as the WHO checklist are vulnerable and quite easily put out of function. This conclusion is supported by studies of checklist compliance.7–10 In essence, this is what non-compliance is all about: putting some of the checklist's barrier functionality out of order.
In this viewpoint, we focus on the WHO checklist as a barrier. But we acknowledge that besides its direct function as a barrier against well-known safety threats, the WHO checklist is also intended to improve the culture of the surgical team by encouraging communication and teamwork.14 The team introduction, for instance, could be seen as a means to make sure that every team member's name and role is known, and as an icebreaker encouraging them to speak up in case they notice something unexpected.15 Studies of safety attitudes and safety culture changes in association with the introduction of surgical safety checklists indicate some improvement, though the changes so far appear to be small.16,17 But the timescale for such an improvement to appear may simply need to be longer. The introduction of a tool taken from another context, as the checklist practice in healthcare was adopted from aviation, is a complex matter. As Carlile points out, this type of process involves both transfer and translation of the tool, as well as transformation of the environment to which it is adapted:18 a process that could potentially take a long time, spanning multiple iterations of adaptation, before both the tool and the adopting practice have been sufficiently adapted. Hence, one cannot be sure whether the full potential of the surgical safety checklist has been reached at this still early stage of usage, at least not when it comes to its impact on culture.
The dynamic aspects of safety
Failure to comply with a routine, such as a safety checklist, is often attributed to what is commonly called human error. Rasmussen argues that in sociotechnical systems, task analyses focused on human error ‘should be replaced by a model of behaviour shaping mechanisms in terms of work system constraints, boundaries of acceptable performance, and the subjective criteria guiding adaptation to change’ (p. 183).19 Hence, he advocates a shift of focus from the error itself to the dynamic interactions and mechanisms leading up to the error. Rasmussen illustrates this with his dynamic safety model, consisting of three boundaries that form a safety envelope: the workspace within which the system can operate safely (figure 1). The boundaries are: (1) the boundary to economic failure, (2) the boundary to unacceptable workload and (3) the boundary of functionally acceptable performance, or what in this research is referred to as the boundary to performance failure. The model also includes an error margin towards performance failure, resulting in an inner, perceived boundary to performance failure.19
Cook and Rasmussen point out that it is normally uncertain exactly where inside the safety envelope the system is currently operating, that is, how far it is from the boundary to performance failure; this becomes explicit, of course, when an accident occurs. To avoid accidents, the organisation strives to move away from the boundary to performance failure (gradients from efforts to improve safety), creating the error margin. But at the same time, the system's operating point is affected by gradients pushing it away from economic failure and unacceptable workload. The system can be said to be in a relatively stable state when the operating point stays in roughly the same spot relative to the boundaries, even though this stability is far from definite. Rather, it is continuously recreated by the dynamic processes of the system. If the gradients away from economic failure or unacceptable workload become too strong, the system's operating point will drift towards the boundary to performance failure and hence towards higher risk.20 This could occur under conditions of extreme workload, like those of an understaffed emergency ward, or when the healthcare system is under economic pressure.
Striving towards efficiency
Under the kind of pressure described above, when the economic demands or workload simply become too much to bear, a likely strategy is to turn to workarounds to increase efficiency.21 When this happens, one should not be surprised that ‘every system moves to the limit of its capability’, as Cook and Rasmussen (p. 133)20 point out, referring to Hirschhorn's law of systems. Furthermore, Reason states that gains in defences ‘… are often converted into productive, rather than protective, advantage, thus rendering the system less safe than it was before’ (p. 59).11 As Hollnagel points out in his efficiency–thoroughness trade-off principle, we continuously have to choose between efficiency and thoroughness; in this case, between efficiency and safety.22 In the light of these remarks, it appears as if a system always strives to be as efficient as possible. An example would be if checks of checklist items that were previously made, formally or informally, are omitted as unnecessary once the checklist is introduced, because they are expected to be covered by the checklist anyway.
Normalisation of deviance
When a deviation occurs, for instance when one part of the WHO surgical safety checklist is omitted, and nothing happens, no one protests and the patient seems to be fine, the deviation is easily accepted or institutionalised. This is commonly referred to as the normalisation of deviance.23–25 In Rasmussen's dynamic safety model, normalisation of deviance means that the error margin is moved outward, towards the boundary to performance failure. People who act in a deviant manner normally do not do so out of malevolence; in fact, they usually think they do it for the sake of the patient.23 If they do not do it directly for the patient, they might fail to comply because the rules at hand are perceived as stupid.24 Nilsson et al15 show in a checklist implementation study that 47% of the surveyed participants found the team member introduction part of the checklist to be ‘probably without importance’ or ‘without importance’, making it the part perceived as least important. This can be compared with the study by Levy et al9 showing that only 10% completed the team identification part.
Implications for patient safety
The discussion above can be summarised in the following premises about the checklist:
The checklist is a weak type of safety barrier that is easily put out of function and is vulnerable to normalisation of deviance, especially those parts that are not perceived as important to all users.
The checklist provides gains in safety, but those gains are threatened by demands for efficiency, resulting in safety gains being transformed into production gains. As a result, other barriers against patient harm may be perceived as replaced by the checklist and thus ignored in order to improve production.
On their own, the two premises do not pose any major threats to patient safety. If just the first premise is true, and it becomes routine not to comply with all the checklist items, then the situation would at least be no worse than it was before the checklist. Those checklist items with good compliance still improve patient safety. As pointed out by Amalberti et al,26 violations, that is, non-compliance, do not have to be a sign of decreased safety; rather the contrary, as additional safety rules and routines create additional opportunities for violations to occur. Hence, despite a higher incidence of violations following the introduction of new safety rules or routines, the system can still have become safer.
If just the second premise is true, this means that other safety checks, formal or informal, are being omitted because they have been replaced by the checklist. This is no problem as long as compliance with the checklist is good. In fact, practice could even turn out more efficient while still as safe as or safer than before. But when both premises are true at the same time—when compliance with the checklist is flawed and other safety checks are omitted because they are thought of as being handled by the checklist—then we have a new safety threat because we have induced a false sense of safety into the healthcare system.
Suggested future research
The identified threat of a false sense of safety places new demands on future studies. We suggest that future research should address the following topics, not previously investigated in the healthcare checklist literature:
How the work dynamics of the healthcare system around the patient change when the checklist is introduced and how this affects existing barrier systems.
How compliance with different items on the checklist correlates with different types of accidents or adverse events, as well as with the perceived importance of different checklist items, rather than focusing on compliance alone.
How a checklist should be designed and implemented so that it is perceived as meaningful by those involved in its usage, not merely adding a barrier but also improving the behaviour-shaping mechanisms.
By investigating these questions, we believe it is possible to gain a better understanding of what happens to a healthcare system when a checklist is introduced and incorporated. We also believe it would lead to a better understanding of how checklist usage both threatens and creates safety.
Contributors All authors contributed to the development of the ideas presented in this publication. CR was mainly responsible for the drafting of the manuscript with the other authors providing critical input, suggestions and feedback.
Funding The Gorthon Foundation.
Competing interests None.
Provenance and peer review Not commissioned; externally peer reviewed.