Persistence of unsafe practice in everyday work: an exploration of organizational and psychological factors constraining safety in the operating room
  S Espin,1 L Lingard,2 G R Baker,3 G Regehr4

  1Faculty of Community Services, Ryerson University, 350 Victoria Street, Toronto, Canada
  2Wilson Centre for Research in Education and Department of Paediatrics, University of Toronto, Toronto, Canada
  3Department of Health Policy, Management, and Evaluation, University of Toronto, Toronto, Canada
  4Wilson Centre for Research in Education and Department of Surgery, University of Toronto, Toronto, Canada

  Correspondence to: S Espin RN, PhD, Associate Professor, Faculty of Community Services, Ryerson University, 350 Victoria Street, Toronto, Ontario, Canada M5B 2K3; sespin@ryerson.ca

Abstract

This paper explores the factors that influence the persistence of unsafe practice in an interprofessional team setting in health care, towards the development of a descriptive theoretical model for analyzing problematic practice routines. Using data collected during a mixed method interview study of 28 members of an operating room team, participants’ approaches to unsafe practice were analyzed using the following three theoretical models from organizational and cognitive psychology: Reason’s theory of “vulnerable system syndrome”, Tucker and Edmondson’s concept of first and second order problem solving, and Amalberti’s model of practice migration. These three theoretical approaches provide a critical insight into key trends in the interview data, including team members’ definition of error as the breaching of standards of practice, nurses’ sense of scope of practice as a constraint on their reporting behaviours, and participants’ reports of the forces influencing tacit agreements to work around safety regulations. However, the relational factors underlying unsafe practice routines are poorly accounted for in these theoretical approaches. Incorporating an additional theoretical construct such as “relational coordination” to account for the emotional human features of team practice would provide a more comprehensive theoretical approach for use in exploring unsafe practice routines and the forces that sustain them in healthcare team settings.

  • patient safety
  • organisational factors
  • teamwork

The “patient safety movement” is no longer news to most patients and healthcare professionals. Five years ago the Institute of Medicine report served as a springboard for a public and scientific flurry of activity directed towards improving the reliability of healthcare systems.1,2 Research has advanced our understanding on many fronts, including the characteristics of effective incident reporting systems,3–8 the causes of medication error,9,10 and the ethics of error disclosure.11,12 Many hospitals have developed administrative structures for making safety a visible concern through patient safety councils, safety officers, and electronic incident reporting and medication ordering systems. In addition, educational efforts such as root cause analysis and teamwork training are beginning to feature in health professions curricula.13,14

Notwithstanding this activity, however, there is a strong sense that health care is not moving quickly enough towards high reliability.15 In fact, there is growing impatience with the system’s immunity to change, and an intensifying effort to understand and challenge it. For instance, a recent paper by Amalberti et al16 argues that the historical and cultural emphasis on individualism and autonomy in health care, its economic drive for productivity, and structural elements such as chronic staff shortages must be overcome if rapid progress is to be made towards ultra safe health care. They warn that, to achieve progress, we will need to identify and adapt closely held values and traditions that enforce the status quo.

Understanding the persistence of unsafe practice requires more than the review of broader social and cultural factors. What is needed is a careful examination of the impact of these issues in healthcare “microsystems” where small groups of practitioners and patterns of practice create the context for improving safety. Our research takes place in the operating room (OR) setting where errors are frequent and consequential. In the three North American studies of the incidence of adverse events, the largest number of adverse events resulted from treatment provided in the OR.17–19 The proportion of operative adverse events was remarkably stable across the three studies, comprising about half of all adverse events. The attitudinal climate of the OR has also been called into question in survey research comparing attitudes of workers in health care and aviation. A survey by Sexton et al20 suggests that lack of teamwork within and between disciplines may be a key factor in surgical error, and Gaba et al21 found that the safety climate was worse in hospitals than in aviation, and worst in OR and emergency settings. Such research suggests that the OR is a domain in which improved safety is an urgent and significant challenge, and that a critical first step is to understand the factors that perpetuate unsafe practice here.

Towards this end, we interviewed surgeons, nurses and anesthesiologists, providing them with scenarios in which something goes wrong during the team’s work and asking for their impressions of whether these events constituted errors, what factors they thought contributed to the errors, and whether they thought reporting the event was important. We sought to probe the factors influencing whether team members saw such events in everyday work as problematic or whether they rationalized such occurrences to support the status quo.

THEORETICAL FRAMEWORK

Our analysis draws on three concepts from organizational and psychological theory to explore team members’ responses to these error scenarios. The first is the “vulnerable system syndrome” which Reason et al22 describe as a cluster of “organizational pathologies” that interact to make some systems more liable to unsafe practice and adverse events. These pathologies—blame, denial, and pursuit of financial rather than safety excellence—are perpetuated by single loop learning which targets individual error makers at the sharp end and fails to question core beliefs, recognize systemic causes, or invoke global reforms.

The second concept we draw on in our analysis is Tucker and Edmondson’s23 concept of first order problem solving, which builds on Argyris and Schon’s24 theory of single loop learning. They use it to explain why hospital workers respond to problems with a “quick fix” that supports short term productivity but avoids addressing the underlying cause, allowing problems to recirculate continuously through the system. Tucker and Edmondson23 argue that three factors promote first order problem solving and inhibit organizational learning from failures: unit efficiency, individual vigilance, and empowerment.

Finally, we apply Amalberti’s theory of “migration to the boundaries” in which he models the forces that prompt workers to purposefully work outside established zones of safe practice.25 His migration model provides a vehicle for understanding why competent, well meaning healthcare providers may recurrently work around regulations that exist to safeguard care. Migration is prompted by the balance of two interacting dynamics—the system’s drive to maximum performance and the worker’s search for individual benefits.

These theoretical concepts combine to provide a powerful set of lenses for examining the organizational and psychological factors that perpetuate unsafe practice. They allow us to view OR team members’ perceptions from team and organizational perspectives and to explore the ways in which individual attitudes towards everyday error may be shaped by unit level conditions and professional culture.

RESEARCH BASE

In this paper we summarize findings from an interview study of team members’ perceptions of error definition and error reporting. We analyzed qualitative data from 28 individual interviews of OR team members (9 surgeons, 9 nurses, 10 anesthesiologists) at two teaching hospitals in order to explore the key organizational factors underlying their approaches to unsafe practices in everyday work. Team members had an average of 10 years’ experience (range 1–25 years). Surgeons and anesthesiologists were predominantly male (89%) and all OR nurses were female. The interviews were semi-structured and built around four hypothetical general surgery scenarios depicting team based incidents in the perioperative period: (1) a retained sponge following abdominal surgery; (2) administration of cephazolin for injection (Ancef; GlaxoSmithKline, NC, USA) to a penicillin allergic patient; (3) dropping and incorrect marking of a surgical breast specimen; and (4) a burned bile duct necessitating conversion from a minimally invasive surgery to an open procedure. These incidents were selected to provide a range of team situations and participants. We also sought to include a range of incidents representing everyday mundane unsafe practices such as the decision in the Ancef scenario not to check the patient’s chart because the healthcare worker was busy and considered allergy to be unlikely, or the diverted attention in a multi-tasking teaching situation which leads to the burned bile duct. A grounded theory approach to analysis involved an iterative process in which three individuals read transcripts and developed categories reflective of emergent themes.26 A detailed description of the study methods can be found elsewhere.27

The team setting provides a rich context for investigating the persistence of unsafe practice as a social and organizational phenomenon. Unsafe practice at the systemic level—such as the everyday “work around” of the standards for checking patients’ allergies—requires tacit agreement among the team to persist with the status quo rather than “rocking the boat” in the service of a shared goal such as finishing the day’s OR patient list. We expected that, by discussing such instances with members of the interprofessional team, we could uncover the global organizational factors that support this tacit agreement and understand how these factors work and how they might be addressed in our efforts towards improved safety.

RESULTS AND DISCUSSION

Our analysis of participants’ discussions of the scenarios provides insight into three factors that we argue assist the perpetuation of unsafe practice: (1) the definition of error as an individual breach of standards of practice; (2) the role of “scope of practice” in nurse reporting preferences; and (3) the factors underlying the migration of practice across the safety boundary.

Standards of practice and “vulnerable system syndrome”

Changing unsafe practice relies in part on the perception of individuals that errors are occurring in their workplace. In our interviews we therefore asked participants if a scenario was an error and to describe their reasoning. In their responses, participants from all three professions emphasized the breaching of standards of practice as the dominant element in their definition of error.27 When standards were breached, an unsafe practice was seen as an error. For example, as one nurse asserted of the Ancef case: “It’s an error because nobody checked the patient, they didn’t follow the standard here”. A surgeon agreed: “It’s not within the regular standard of acceptable practice”. By contrast, when no explicit standard exists to be breached, the unsafe practice is seen fatalistically as an unavoidable accident. For instance, one nurse referred to the dropped specimen event as “an unintentional slip”, whereas an anesthesiologist described the same event as “an unfortunate turn of events”. Overall, only 25% of participants perceived the dropping to be an error. By contrast, all the participants described the retained sponge and administration of Ancef to an allergic patient as errors, reasoning that clear professional standards exist to regulate these practices and they were not followed. Similarly, the burned bile duct was seen by 93% of participants as an error, due to the surgeon’s failure to follow the implicit professional standard of “keeping your eyes on the surgical field”.

The reference to “standards of practice” as a way of approaching error presents a powerful example of Reason’s “vulnerable system syndrome” because it demonstrates our participants’ focus on individual breaches and their lack of attention to systemic error provoking weaknesses. For instance, participants blamed the retained sponge error on the failure of nurses and surgeons to follow the standards for checking and double checking the sponge count and the operative area before closing. The count standards are in place to govern individual practice and our participants used them to invoke the cause of the error as an individual failure of the nurse and the surgeon to act in a manner compliant with the standards. This approach represents a linear causal logic that deflects attention from how a network of circumstances constrains individual actions. It also ignores how standards exist in a system in which their application often conflicts with other processes and values. For example, the pressure to maintain OR schedules or complete other tasks may conflict with the desire of nurses and surgeons to ensure accurate sponge counts. As one surgeon commented: “You’re behind schedule and you don’t pay attention to these fine details”. It is important to recognize that, while this surgeon’s comment might easily be interpreted as the individual “getting sloppy” because he is falling behind, it is more likely the case that this individual is describing a natural human response to several pressures that implicitly prioritize efficiency over ultra safe practice: if the next case is cancelled because of time pressures, the next patient will be inconvenienced and potentially placed at greater risk for the delay, the hospital will make less money, and the surgeon will be seen as less efficient. In the bustle of everyday activity, each of these real consequences of delay can easily overshadow the very low probability, highly theoretical possibility that a sponge will be left in this patient here and now, and therefore act to inhibit the propensity towards highly redundant, low yield activities such as sponge counts.

Looking beyond the immediate actions in the retained sponge event to the basic assumptions and conditions that gave rise to them helps to identify a network of factors that call into question the role of individual error or breached standards, in light of important but latent system factors. For instance, lack of role clarity is often an issue on the interprofessional team, and this event may have been shaped by it. Our participants asserted that both nurses and surgeons had responsibility for checking sponges, and that both were in error in this scenario. In fact, formal standards of practice for checking exist only for nursing.28 Surgery, on the other hand, is guided by implicit rules for checking, and checking is a lower priority for surgeons among their other tasks. As one surgeon commented: “Ensuring that all bleeding has been stopped is a more pressing concern [for surgery] than sponge counts”. Similarly, team relations may contribute to the unsafe practice. If sponge checking is low priority for the surgeon, then the redundant system in which members of two professions check sponge counts may be present only when the nurse feels comfortable with prompting the surgical check. Two other factors worth considering are time pressures and multi-tasking. OR teams are under significant pressure to complete cases expeditiously to ensure that all patients on the daily OR list are cared for. The pressure of time has been described as a fundamental source of OR team tension.29 Time pressures lead to multi-tasking surgeons who often rush off to complete other duties between cases. Although not well documented in the literature, it seems very likely that pressures to work more quickly are highly relevant to failures to observe standards of practice.

Consideration of such factors and their impact on compliance with standards of practice calls into question the underlying assumption of “free will” which presumes that OR team members are always in an unconflicted position to choose to observe standards, so that non-compliance is conscious and blameworthy. For example, one study participant noted: “He [surgeon] should have checked. It’s an error”. The fact that our participants relied so pervasively on breached standards in their definition of whether an event was an error imposes two important barriers on improving safety in this domain. Firstly, unsafe practices such as removing a specimen from its context before marking it may be “excused” because no official standard or routine exists to prevent them. Secondly, and more importantly, the focus on the individual’s breach of standards and the absence of reflection on circumstances that constrain compliance means that responses to error are likely to target individual remediation rather than system-wide learning or redesign of work practices. Since, as argued above, the individual’s response is more likely than not a natural (perhaps even reasonable) human response to a complicated and competing set of systemic demands, addressing the individual response rather than the systemic demands does nothing to move a vulnerable system towards improved reliability. As Reason et al22 argue, it is far easier to fix situations than to change people, and this is the only way to achieve institutional resilience in health care.

Scope of practice and first order problem solving in nurse reporting preferences

A popular element in patient safety efforts has been the implementation of incident reporting systems. Such systems in other high reliability organizations (such as aviation) have been found to promote organizational learning from near misses and errors and, as a consequence, to facilitate the development of a safer system. Of course, the effectiveness of reporting systems depends upon the attitudes and values of those charged with using them.30 Traditionally, nurses have been the group expected to report incidents and therefore we were interested in the attitudes of nurses in our study toward error reporting.

When nurses in our study identified scenarios as errors, they advocated some sort of reporting for only 61% of these errors and formal reporting through the incident reporting system for only 45% of these reports.31 For the remaining 55% of instances of errors that they would report, nurses advocated more informal methods of reporting including a “heads up in the coffee room” or inclusion in their nursing notes. Their dominant rationale for their reporting preferences was the concept of “scope of practice”. Nurses explained that they would report errors in nursing practice (such as a retained sponge) but not errors in the practice of other professionals on the team (such as the burned bile duct) because such practice was outside their scope of practice and expertise. As one nurse explained: “He made the error, it’s the surgeon’s responsibility to report”. Overall, scope of practice appears to function as a constraint on reporting such that, in a team setting, if nurses are expected to be the “reporters and recorders” but they are not comfortable reporting on others’ practice, then many team based errors will go unreported. Lack of reporting or informal reporting means lack of organizational learning, such that these unsafe practices are likely to persist. At best, informal reporting will lead to localized knowledge by other nurses in direct contact with the individual who observed the initial event.

Tucker and Edmondson’s model23 of first order problem solving provides a useful lens for exploring the constrained reporting practices suggested by our study data. They explain that first order problem solving involves implementation of short term fixes for problems that arise in the work process, in contrast to second order problem solving which seeks out underlying causes and informs those responsible. While first order problem solving moves the daily work along, it is counterproductive in that it “keeps communication of problems isolated so that they do not surface as learning opportunities”. Tucker and Edmondson’s model23 articulates the cultural values and organizational factors that make first order problem solving deceptively attractive. They explain that, due to the norm of individual vigilance, first order problem solvers are well liked in the system because they work through their own problems independently, they do not get involved in other people’s problems, and they do not make waves by questioning the system values and processes that underlie everyday problems. Furthermore, first order problem solving occurs in an organizational setting of “empowerment” in which the reduction of management oversight limits the perspective, status, and connections that individual workers need in order to engage in second order problem solving.

Nurses’ preference for informal reporting of errors in our study exhibits limitations similar to those identified with first order problem solving by nurses in Tucker and Edmondson’s observational study.23 For instance, when OR nurses decide to report problems such as unknown allergies, unsigned consent forms, or equipment failures informally among their closest colleagues, these issues are less likely to be addressed at the system level and more likely to recur. Furthermore, the value of individual vigilance that enforces first order problem solving is readily evident in the tendency of our nurse participants to approach the professions as individual silos responsible for individual vigilance in the event of an error. Nurses in our study evinced strong reluctance to judge errors that they perceived originated with other team members, and described an interprofessional etiquette of “knowing one’s place”: “It’s a tricky thing. You’re overstepping … it’s not your discipline of practice; it’s the surgeon’s responsibility”.

Finally, the interprofessional team setting is one in which lack of broad oversight and understanding of individual functions is a core problem. As Tucker and Edmondson23 argue, “empowerment”, or the lack of management oversight and assistance, contributes to the inability of individual nurses to invoke second order problem solving because they may lack both a sense of where one problem fits in the bigger picture and the authority to communicate across boundaries to promote change. In the OR team, team members often do not fully understand where everyone’s work fits into the whole process, and OR managers are in ever diminishing supply as hospitals drive towards increased productivity and decreased expenditures. Furthermore, in our study hospitals, nurses did not tend to work consistently with the same surgeon and team, and their rotating assignments may further reduce understanding of system processes as they relate to particular services and procedures. Such management and staffing issues may further explain nurses’ conservative stance of formally reporting events only within their scope of practice and, consequently, failing to promote second order problem solving at the system level.

Factors underlying migration to the boundary of safe practice

The richest portion of our interview discussion surrounded participants’ explanations of how the unsafe practices in the study scenarios could arise. All participants confirmed that these were events that happened not infrequently in their everyday practice, and they were able to reflectively outline the forces that might have combined to produce the situation represented in each scenario. To take the Ancef scenario as an example, participants recurrently described the circumstances that give rise to what, in essence, is the anesthesiologist’s and nurse’s tacit agreement to work around the regulation of checking patient allergies before administering medication. The reasons for such tactics can be seen as falling into four categories identified by Amalberti:25 external forces, individual concerns, safety regulations, and safety nets.

Participants perceived a range of external forces influencing the unsafe practice represented in the Ancef scenario. The most dominant of these was what Amalberti25 refers to as “market pressures” which, in the OR, manifest as the pressure to do as many surgical cases in a day as possible. As one anesthesiologist suggested: “Time constraints and fiscal constraints … add pressures of time conservation and quick movement in the OR.”

For surgery and anesthesia, the need to do more with less has produced a strong cultural acceptance of multi-tasking in the teaching hospital. Staff anesthesiologists often cover two ORs simultaneously, supervising trainee anesthesiologists who are dedicated to each room, and surgeons often arrive in the OR late and leave early to attend to other duties such as outpatient procedural clinics or committee responsibilities while surgical residents open and close the incision. Such multi-tasking is a behavioural response to market pressures which, over time, can migrate everyday practice beyond the safety boundary. Reasoning about the conditions underlying the Ancef error, a surgeon emphasized: “It’s the hurried circumstances day to day, and everybody is busy”, while a nurse judged that: “The speed of trying to get the case started contributed to this.”

However, market pressures are only one of the dominant forces in the migration model of Amalberti,25 which is built on the premise that work practices will inevitably migrate in response to the conflicting pressures of maximum performance (the system’s output) and individual benefit (the worker’s quality of life or ease of work). The influence of individual workers trying to ease their own workload or achieve their own goals was another recurrent theme in our study. Participants acknowledged that “individuals are concerned with their own tasks” and “there is not really a team” in instances such as the Ancef case. Here again, a lack of role clarity is a likely factor in the tendency of individual workers to exclude the Ancef responsibility from their own workload as part of an overall strategy to limit and complete their own task list. Preoperative antibiotic administration presents an unusual interprofessional situation in which one physician (the surgeon) orders a medication and another physician (the anesthesiologist) administers it. Traditionally, in terms of professional roles, physicians order medications and either administer them themselves or, depending on the medication, have nurses administer them on their behalf. In the case of preoperative antibiotics the roles are blurred. The anesthesiologist has not ordered the medication and some believe strongly that surgeons should be responsible for administering it.32 Further, the anesthesiologist’s primary goals in the preoperative period are the insertion of necessary lines (IV, arterial, CVP) and the administration of anesthesia medications (epidural, general anesthetic), pushing both the check for penicillin allergy and, often, the administration of the antibiotic far down their individual list. In effect, the administration of antibiotics is not a priority for anesthesiologists, surgeons, or nurses.

The drive towards individual benefits and maximum performance in response to market pressures is often in conflict with safety regulations. The Ancef case demonstrates the rationalization process that such a conflict can produce in workers. For instance, while anesthesiology participants acknowledged their disciplinary guidelines regulating pre-anesthetic assessment for drug allergies, the combined pressures of time and individual workload created “a feeling of not needing to follow the rules” in this scenario. What is fascinating in our data is the link made by participants between safety nets and the choice to break the rules. As one surgeon remarked: “Measures are instituted to prevent error and individuals … depend on those measures to correct for their oversight, so they don’t even make the effort anymore”. Often, participants referred to the team itself as a safety net in terms of the redundancy of “more than one set of eyes observing and responsible for the actions of each team member”. But, as one nurse commented, this can lead to “one person presuming the other person did the [allergy] assessment”. In social psychology this is referred to as the “diffusion of responsibility”33—that is, if there are several people witnessing a situation, there is a lower likelihood that it will be addressed because everyone presumes someone else will do it.

Ironically, while safety nets are in place to prevent violations and accidents, they appear to promote migration and unsafe practice. Individuals assume that someone on the team is observing regulations, so their individual practice can move towards maximum performance and individual benefits. If, as one anesthesiologist pointed out, “such assumptions are wrong”, then the whole team has unknowingly migrated across the safety boundary. As one surgeon summarized: “Everybody is busy looking after their own tasks and no one takes on the prime responsibility of determining whether the patient had an allergy or not”. Redundancy is meant to enhance safety, but when team members focus on their own tasks rather than the team goals, overlapping and joint responsibilities become no one’s critical job.

As migration occurs, a tension is created between the system’s increased output and its decreased ability to respond appropriately in case of error. Amalberti25 argues that the real or “operational” space of action (everyday work practice) will always drift beyond the safe space of practice as dictated by system design and regulations and, echoing Rasmussen,34 he asserts that the path to safety is to make boundaries explicit, to expect workers to practice at or beyond them, to understand what influences this migration, and to help workers develop coping skills to practice at the boundaries. Our participants’ discussions suggest that one factor encouraging migration is the perception of the team itself as a safety net, the assumption, encouraged by this perception, that others are observing safe practice, and the invisibility of team migration in this setting until an adverse event occurs. We would contend that these assumptions will need to be addressed as part of any program to improve the team’s coping skills at the boundary of safety.

CONCLUSIONS

Our findings suggest that unsafe practice persists because it is a functional response to psychological factors, professional values, and organizational pressures. A multifaceted theoretical approach helps to elucidate a set of overlapping factors that combine to perpetuate the status quo. Improved understanding of these factors is the first step in challenging the persistence of unsafe practice.

The tendency of team members to define error as an individual breach of standards reflects a linear logic in which the team member is seen to have purposefully and willfully chosen not to obey the rules. As Reason suggests, this is a characteristic of “vulnerable system syndrome” in which immediate causes of individual action are examined to the exclusion of broader questions of underlying conditions that produce error. This tendency extends to nurses’ reporting preferences which are tightly constrained by their sense of scope of practice. By considering the parallel situation of first order problem solving theorized by Tucker and Edmondson,23 we can see that this sense of constraint is reflective of strongly held organizational values regarding individual vigilance, as well as the translation of such values in the interprofessional team setting into turf etiquette. Finally, the recurrent practice of working around standards can be further understood with reference to Amalberti’s migration model25 which draws attention to the conflicting pressures of individual benefit and maximum system performance. This conflict is resolved in part by the perception of the team as a safety net, such that individuals can invoke the redundancy of the team system and avoid regulations because they assume another team member will have followed them. However, where teams fail to establish a clear understanding of joint responsibilities and active cross checking, key activities may “fall through the cracks” and escape attention.

The three phenomena we have described are mutually reinforcing. Nurses’ carefully circumscribed definition of their turf, and surgeons’ and anesthesiologists’ division of labor, undermine the identification of safety issues and solutions which require contributions from all these professions. Nurses and others react by focusing on the current problems, addressing issues as they emerge. Such tactics are reinforced by the larger culture which sees error as a product of individual failure to follow standards, and ignores conflicting priorities which have not been explicitly addressed in those standards. Changing these mutually reinforcing attitudes and behaviours probably cannot be done at either the team or the organizational level alone; rather, it requires a concerted attempt to focus on better identification of issues and joint problem solving in the OR, coupled with a clearer view of what constitutes effective practice and what issues or events need to be reported, so that consistent systemic solutions can be identified to improve results.

Consideration of these three complementary theoretical approaches provides a broader perspective for analyzing critical incidents and also for identifying problematic practice routines in everyday work and, in our particular case, teamwork. Drawing on Reason et al’s attention to organizational pathologies,22 Tucker and Edmondson’s insights into the self-perpetuating cycle of first order problem solving,23 and Amalberti’s focus on the forces that encourage migration,25 two central questions can be asked of such practice routines:

  1. Why and how do such problematic practice routines get established? What individual, systemic, or cultural conditions support their continuation?

  2. Under what circumstances do these accepted practices get called into question? What are the useful strategies for exposing such “taken for granted” ways of operating?

Key messages

  • The factors influencing the persistence of unsafe practice were explored in an operating room team and a preliminary descriptive theoretical model for analyzing problematic practice routines was developed.

  • Using data collected during a mixed method interview study, participants’ approaches to unsafe practice were analyzed using three theoretical models: Reason’s theory of “vulnerable system syndrome”, Tucker and Edmondson’s concept of first and second order problem solving, and Amalberti’s model of practice migration.

  • These three theoretical approaches provide a critical insight into key trends including definition of error as the breaching of standards of practice, nurses’ sense of scope of practice as a constraint on their reporting behaviours, and reports of the forces influencing tacit agreements to work around safety regulations.

  • The relational factors underlying unsafe practice routines are poorly accounted for in these theoretical approaches.

  • Incorporating an additional theoretical construct such as “relational coordination” to account for the emotional human features of team practice would provide a more comprehensive theoretical approach for use in exploring unsafe practice routines and the forces that sustain them in healthcare team settings.

Identifying such “taken for granted” practices is difficult. Staff engaged in their work are unlikely to calculate the efficiency or danger of specific practices. Instead, they make more fine grained decisions about getting their work done within a consensually defined set of accepted work practices. Yet this raises some important questions. We would contend that practice change in the pressured context of healthcare requires an understanding of the ways in which “unsafe” practices are perceived and the conditions that support their persistence. Does a tacit workaround of existing safety regulations reduce the likelihood of another 15 hour day for the surgeon? Does it increase the likelihood of caring for all patients scheduled for surgery that day? Does it minimize conflict between nurses and other OR team members? Efforts towards system change will need to acknowledge such “functionalities” of unsafe practice for their uptake to be widespread and sustained.

This last example of social tensions on the team points out a weakness in this descriptive model—namely, its inability to fully account for the role of human social relationships in the everyday persistence of unsafe practice. The theories we have employed emphasize individual cognition and organizational culture rather than the dimension of interaction that lies between these—the everyday human contact and social formation of friendships and conflicts that occur when individuals work in groups. Recent work by Gittell35 provides a model for capturing this characteristic through the concept of “relational coordination” which theorizes the web of relationships that underlie team behaviour and distinguishes strong team relationships from weak ones. For instance, her concept would strengthen our descriptive theoretical approach to problematic practice routines by its ability to account for why the nurse may be more willing to raise issues of concern with a surgeon whom she trusts, or why the surgeon may ask his regular circulating nurse to remind him to do a final check. While the “equivalent actor” argument reminds us that reliance on relationships is not the key to safety,16 we would contend that a descriptive theoretical model needs to include the organizational, psychological, and relational components in order to facilitate a measured exploration of the persistence of unsafe practice in teamwork.

REFERENCES

Footnotes

  • Funding: none.

  • Competing interests: none declared.