Social Science & Medicine

Volume 62, Issue 7, April 2006, Pages 1605-1615

Turning the medical gaze in upon itself: Root cause analysis and the investigation of clinical error

https://doi.org/10.1016/j.socscimed.2005.08.049

Abstract

In this paper, we discuss how a technique borrowed from defense and manufacturing is being deployed in hospitals across the industrialized world to investigate clinical errors. We open with a discussion of the levers used by policy makers to mandate that clinicians not just report errors, but also gather to investigate those errors using root cause analysis (RCA). We focus on the tensions created for clinicians as they are expected to formulate ‘systems solutions’ that go beyond blame. In addressing these matters, we present a discourse analysis of data derived during an evaluation of the NSW Health Safety Improvement Program. Data include transcripts of RCA meetings which were recorded in a local metropolitan teaching hospital. From this analysis we move back to the argument that RCA involves clinicians in ‘immaterial labour’, or the production of communication and information, and that this new labour realizes two important developments. First, because RCA is anchored in the principle of health care practitioners not just scrutinizing each other, but scrutinizing each other's errors, RCA is a challenging task. Second, by turning the clinical gaze in on the clinical observer, RCA engenders a new level of reflexivity of clinical self and of clinical practice. We conclude by asking whether this reflexivity will lock the clinical gaze into a micro-sociology of error, or whether it will enable this gaze to influence matters superordinate to the specifics of practice and the design of clinical treatments; that is, the over-arching governance and structuring of hospital care.

Introduction

In many health care organizations around the industrialized world, the issues of patient safety and quality of care have over the last decade or so captured the attention of policy makers, managers, clinicians and the public. The drive towards clinical quality and safety is the outcome of three kinds of development. First among these are the reports on iatrogenic injury. The US Harvard Medical Practice Study (Brennan et al., 1991) and the Quality in Australian Healthcare Study (Wilson et al., 1995) among others brought to people's attention a risk of preventable clinical incidents affecting between 10% and 16% of patient admissions. There has also been a wave of publications by clinical academics, as well as legal specialists and health care complaints commissioners, reporting on specific instances of clinical failure, such as those that occurred in infant surgery in a Bristol hospital (Department of Health, 2001), the obstetrics and gynaecology department in a Perth hospital in Western Australia (Douglas, Robinson, & Fahy, 2001) and various departments in Camden and Campbelltown hospitals in South Sydney (Walker, 2004).

Second, recent years have witnessed a rise in consumer participation in health care. Following legal and policy mandates, we have seen consumers joining health boards and hospital executive committees. Consumer pressure has further produced both medical consumers associations and health care complaints commissions (Pickin et al., 2002). In clinical practice, consumers have also taken on a more central role in relation to treatment decision-making, with rising emphasis on ‘informed decision-making’ and ‘patient autonomy’ (Tauber, 2001). Where 40 years ago clinicians might have had discretion over how to manage clinical failure, patients’ and carers’ involvement in health care has changed this considerably (Bosk, 2003).

The third development is that policy makers across the industrialized world are devising mechanisms through which clinicians are now not merely expected to report clinical errors, but also conduct investigations into those errors with colleagues from their organization (Runciman, Merry, & Tito, 2003). We are witnessing the implementation of reporting tools, such as the National Reporting and Learning System (NRLS) that was set up in 2004 in the UK, and the Incident Information Management System (IIMS) recently installed in New South Wales, Australia. These are examples of digital systems that require clinicians to report “by the end of the notifier's work day” (Department of Health NSW, 2004) any critical incidents they witnessed or were party to. These tools are intended to open up to public scrutiny matters that in the past would have remained contained within closed morbidity and mortality meetings and peer review meetings. Incident reports provide the means for outsiders to formulate judgments about and learn from aspects of care that thus far would have been considered to be the expert's or the unit's privileged preserve.

Each of these three developments, we suggest, is a manifestation of a phenomenon recently articulated by David Armstrong, namely, that of the gaze of medicine turning in on itself (Armstrong, 2002). For Armstrong, a rise in medical reflexivity is evident from the growing attention that started to be paid in the 1960s to how consultations with patients and health consumers were enacted (Balint, 1955). This was the beginning of research into doctor–patient communication (Ong, de Haes, Hoos, & Lammes, 1995), which has more recently culminated in explorations of complex kinds of clinician–patient communication such as end-of-life family conferencing and collaborative care planning. In the present paper, we put forward the argument that the reflexivity that is emerging from the examination and reconfiguration of the medical consultation is being broadened in scope even further. This, we suggest, is an effect of the introduction of the mechanisms just described: critical incident reporting and clinicians’ self-managed investigation of critical incidents, termed ‘root cause analysis’ or RCA.

While the most recent kinds of incident reporting occur in a de-identified format on a distant server, the investigation of critical incidents using RCA is conducted by colleagues from within the organization where the incident occurred. RCA requires that team members conduct interviews with personnel involved in or witness to the incident, draw up causal relationships that extend back from the incident towards factors that helped shape the incident and formulate conclusions and recommendations targeting not individuals’ faults but systems and practice design. Defined thus, RCA is an activity that intensifies the reflexivity already increasingly encompassed by the medical gaze. In extending the medical gaze out from the clinical consultation to include the whole of clinical–medical practice, we argue, RCA inculcates reflexivity across clinical professions and encompasses all clinical practices, not just the patient consultation. In that sense, RCA completes the rotation of the prism of illness to focus on the doctor (Armstrong, 2002, p. 185).

First, to begin considering the impact on and difference from established clinical practice, we will outline what is involved in doing RCAs. Second, to explore RCA as activity ‘on the ground’, we present a discourse analysis of an RCA investigation as it was conducted by a team of volunteer clinicians. Central to this analysis is a consideration of how clinicians’ interactions harbour uncertainty and, therefore potentially, reflexivity. Third, we will argue on the strength of this analysis that clinicians’ positionings are changing generally as a result of these trans-specialty investigations, and that doctors’ roles specifically are being reconfigured on the basis of an intensification in horizontal scrutiny or ‘concertive control’ (Barker, 1993). We conclude by asking: will RCA and the reflexivity it appears to institutionalize lock clinicians into a micro-analytics of error, or will RCA empower clinicians to reconfigure the health organizational and management structures that dispose them to clinical risk?

The use of RCA in the investigation of clinical errors was mandated in 1997 for hospitals accredited by the US Joint Commission on Accreditation of Healthcare Organizations (Wald & Shojania, 2001). In 1999, the US National Center for Patient Safety (NCPS) of the Department of Veterans Affairs (VA) piloted RCA at four of its hospitals, and followed this with a full roll-out in 2000. The VA program differed from the Joint Commission requirements of the time by including ‘close calls’ as well as events (US Government Accountability Office (GAO), 2004). The VA model of RCA and RCA training were adopted by NSW Health in 2002 and by the NHS in 2005. At the time of writing (August 2005), NSW Health has trained over 2500 people. It has also produced a second-tier training scheme called “RCA Train the Trainer”, with the aim of further expanding the network of certified RCA investigators in the NSW Health workforce. It was as part of an evaluation of the NSW Health training program that the data presented in this paper were gathered.

Generally, RCAs are set in train when a quality or safety manager, or a department or unit manager, receives a critical incident report form and the severity assessment coding (SAC) of the event is such as to necessitate an RCA. Deaths occurring during clinical care automatically incur an SAC of 1, whereas less serious errors and near-misses have an SAC rating of between 2 and 4. It is mandatory in New South Wales to do an RCA on cases that have an SAC rating of 1, with the investigation of SAC 2s and lower being left to managers’ own discretion.
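By way of illustration, this routing can be summarized as a simple decision rule. The sketch below is schematic only: the function and field names and the mapping for less serious events are hypothetical, and the actual severity assessment coding involves criteria beyond those mentioned here. The only rules taken from the text are that deaths automatically incur a SAC of 1, that the wrong-patient CT scan discussed below was scored SAC 1, and that a SAC of 1 makes an RCA mandatory in New South Wales.

```python
# Schematic sketch of the SAC-based routing described above (illustrative only;
# names and the mapping for less serious events are hypothetical).

def assign_sac(outcome: str, event_type: str = "") -> int:
    """Return a Severity Assessment Code (1 = most severe, 4 = least severe)."""
    if outcome == "death":
        return 1  # deaths occurring during clinical care automatically incur SAC 1
    if event_type == "wrong patient or wrong procedure":
        return 1  # the CT scan near miss analyzed below was scored SAC 1 (Excerpt 1)
    # Hypothetical mapping for other, less serious errors and near misses.
    return {"major harm": 2, "minor harm": 3, "near miss": 4}.get(outcome, 4)

def rca_required(sac: int) -> bool:
    """An RCA is mandatory for SAC 1; SAC 2 and lower are left to managers' discretion."""
    return sac == 1

sac = assign_sac(outcome="near miss", event_type="wrong patient or wrong procedure")
status = "mandatory" if rca_required(sac) else "at the manager's discretion"
print(f"SAC {sac}: RCA {status}")  # -> SAC 1: RCA mandatory
```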

Procedurally, RCAs involve the appointment of a working party, with the hospital's quality manager, or someone comparable, in charge of inviting appropriate (clinical or administrative) staff to participate in the investigation. Then, the working group comes together, typically for three meetings over as many weeks, in between which working party members conduct interviews with the clinicians who were involved in the adverse event, study the clinical domain in question to understand the dynamics of the relevant disease(s) and treatment(s) and the organizational aspects of care, and formulate causal relationships conjectured to underlie the incident. At the last meeting, the working party prepares a report that puts forward the main ‘systems’ causes behind the adverse event (Reason, 1997), as well as recommendations to prevent future failures.

This technical–instrumental objective of clarifying the systems causes behind clinical failure notwithstanding, discussing errors in which colleagues are involved is a delicate process, even in organizations where trust levels are high. The discussion of clinical practices among departmental colleagues before errors have occurred is fraught with difficulty (Iedema & Scheeres, 2003), so it is not surprising that the examination of failure by a team of clinicians from different departments might give rise to anxiety, shame and defiance. As our data show, thinking through error trajectories, devising questions with which to approach colleagues involved in the error, speculating about causes and deciding on and formulating recommendations are all very loaded processes, particularly if people from ‘the department in question’ are on the RCA team. Close analysis of these processes as they unfold reveals not merely how unfamiliar they are for many of the team members involved, but also how conscious people are of the potentially upsetting and even destructive consequences of their questioning and findings for colleagues who were originally involved in the incident.

On a broader front, we claim that RCA is symbolic of how the communicative landscape in acute care settings is changing. The principle of clinicians interviewing each other about how they enact care represents a broadening and intensifying of the phenomenon of medical reflexivity. Thus far, this reflexivity has been framed in terms of medicine moving from “the post-mortem examination of the body by dissection [as] the defining activity of the anatomo-clinical method” towards “the foundation of the ‘doctor–patient relationship’ [as] a form of consultation inquest” (Armstrong, 2002, p. 162). We show that RCA goes beyond analysis of the medical consultation, and involves various health care professionals in scrutinizing medical–clinical practice as a whole. Before pursuing this argument in detail, we turn to the data we gathered as part of our evaluation of the NSW Health Safety Improvement Program, as a way of illuminating how this horizontal scrutiny and reflexivity operates in practice.

Our data were collected during five one-hour RCA meetings. Of these five meetings, one addressed a morphine overdose; two were part of one RCA that focussed on a suicide; and a further two were part of an RCA that investigated the mis-labelling of a CT scan. It is this latter RCA that we look at more closely in the present section. We chose this particular RCA because it targeted a ‘near miss’ that did not result in death. Because it did not involve iatrogenic death, the investigation of this ‘near miss’ should have been relatively unproblematic and unemotional. We expected and found problems and emotions in the meetings that examined more serious errors, but it is noteworthy that even during this non-death-related CT scan investigation, staff were extremely wary about how they positioned themselves in relation to the issues and clinicians investigated. It is on this basis that we surmise that RCA investigations—even those of less serious iatrogenic errors—play an important role in significantly reconfiguring clinical practices and how clinicians enact their relationships.

By way of background, those present at the CT scan meeting analyzed below included the quality coordinator and team convenor, a nuclear medicine technician working in an area similar to that where the error occurred, a manager of nurses responsible for checking patients into operating theatre for surgery, a clinician-manager deemed able to smooth relations with the manager in the department in question and a manager within radiology from where the CT scan originated.

Our data were collected in two forms: as ethnographic field notes (in the case of all five meetings) and as audio recordings (in the case of two of the meetings). The transcripts produced from these two recordings were subjected to discourse analysis. As explained by Iedema (2003), this analysis targets how people speak and enact an activity into being, or how they perform it. In essence, the analysis is tri-focal. First, it examines the ideational dimension of interaction, or how people talk about the (technical details of the) work. Second, it examines the interpersonal dimension through which people position themselves ethically, emotionally and by expressing judgements. Third, and most central to our argument, it examines how this particular kind of speaking unfolds as socio-organizational activity, or its temporal staging (Iedema, 2003). It is this latter analysis that serves to reveal how the talk fluctuates across multiple and contrasting concerns and positionings, rather than simply and linearly going through the procedures that ‘officially’ define RCA as an investigative technique.

Together, these three analytical foci help illuminate the pressures that bear on people to accomplish (if not invent) the practice of RCA as recently introduced organizational practice, and show how participation in RCA meetings produces very different conduct from that which we witness during hand-over meetings (Parker, Gardner, & Wiltshire, 1992) or ward rounds (Manias & Street, 2001). As our data show, what people do and say in RCA team meetings is quite exploratory and uncertain, and far from formulaic and ‘pre-digested’.

The central analytical claim in what follows is that the talk at RCA meetings is difficult work. We will demonstrate this by showing how the talk vacillates between ideational and interpersonal issues on the one hand and, when addressing interpersonal issues, swings back and forth between affective and critical talk on the other. Thus, at one level, talk about the procedural mechanics of the RCA, the facts or ‘chronology’ of the case and the practical logistics of who to interview and what to ask them is interspersed with talk that is concerned with managing interpersonal relationships. Within the latter, team members are concerned about how to break the news to colleagues that this RCA is under way; how to make sure questions are asked in ways that do not upset people; who to appoint to the task of contacting particular interviewees and asking the ‘hard’ questions, and how to rescue relationships with the interviewees in case the latter become defensive or anxious. At the same time, team members can be heard to make observations that are judgmental of their colleagues’ practices and critical of specific actions. This unstable tenor is further reflected in how the meeting chops and changes as it unfolds, suggesting that team members are apprehensive about their new role and about its potential to seriously perturb the ‘negotiated order of the hospital’ (Strauss, Schatzman, Ehrlich, Bucher, & Sabshin, 1963).

We now turn to illustrating how team members enacted these dynamics. It should be noted that this particular meeting was selected for analytical presentation here because it does not deal with clinician-caused death but ‘merely’ with a near miss, and that it might, therefore, not be expected to induce the uncertainty and emotional tensions that it nevertheless displays. In what follows, excerpts are presented in the order in which they appeared in the transcript.

The team convenor opens the meeting with a justification for having convened the team.

Excerpt 1

13-A—We’re investigating—this is a root cause analysis on a wrong patient who received a CT and … so that, under the scoring of the SAC system is a SAC 1, for wrong patient for wrong procedure, and so we’ve done a root cause analysis. One of the … this is sort of a bit of a difficult area ‘cause some of the procedures aren’t really of a high-risk to patients, but … like, if you did a … X-ray's not really a high-risk sort of procedure. CT is a higher risk and we actually got a letter from the Radiation Safety Committee requesting that we did an RCA because there's been a few of these over the last couple of years. That's one of the reasons why we’re doing it. One of the other reasons we’re doing it is we’ve recently introduced … we recently did an RCA in radiology for a wrong side and they … one part of that recommendation was to implement this correct patient, correct site, and correct procedure policy, so it's a bit of a concern obviously for us that not long after that implementation, we’ve got another problem. So, this is an RCA.

Having justified the exercise, the team convenor explains what an RCA requires in general:

Excerpt 2

3-A—We’re just looking at the chronology now and what we’ll do is, we’ll go back through—we’ve just sort of had a brief look at it, and we’ll go back through it now and start looking at what sort of questions we need to ask, what's missing—what's the missing bits of information. Basically we had a patient that was brought in by ambulance, they had an X-ray, while they were having an X-ray, they sent for another patient, and there was a bit of a mix up. The patient was sent … the patient that was already in X-ray was sent up to the CT scan and they scanned the wrong patient, and they found out from the patient that should have had a CT scan a day after when she denied having had a CT scan to the neuro-surgical team. Okay?

Soon after, the RCA team members launch into considering the specifics of the case:

Excerpt 3

52-B—And I suppose then they would have been looking at a plain film X-ray because the right X-ray was done on this woman's [body part] and then she had the CT, so they would have had to have looked at the two request forms with different names on it.

Here, they discuss the clinical process that is at the heart of the error, drawing on each other's knowledge of it, but also reasoning through its logic, or how the process might be expected to unfold under ‘normal’ circumstances:

Excerpt 4

66-A—If the form precedes them to radiology, which you think, you know, nurse initiated X-rays, sent off to radiology, does … then, whoever in radiology calls and says, “We’re ready,” does the orderly take back the form to identify the patient in ED? Or do they just speak to nursing staff and say, “I’m here for radiology, who am I taking?”

Part of this discussion is also a tentative exploration and reconstruction of how the error could have occurred:

Excerpt 5

108-A—Allright. So, the patient's called for and the patient arrives … because I think that's actually a … there's a couple of steps here it appears, where the error could have occurred. One, that the … you know, the girl at the desk has made an assumption that the person they’ve called for is the one that they’ve already got. And second, [?], I guess is that when they arrived in CT, were they checked properly? Do you know what I mean?

At this point, the meeting changes tenor. Having explored how patients, X-rays and CT scans travel through the system, and feeling perhaps that they may have reached the limits of their knowledge, team members do two things. First, they revert to knowledge about the areas of their own expertise and how related processes are regulated and systematized there: “we compare that [what they’ve written on the consent] with what we have on the theatre list”. Second, they link this consideration of how things work in their own specialty to an indirect judgment about what goes on in the specialty that is in question, querying why it does not regulate and systematize its processes: “I mean, do they sort of look at it…?”:

Excerpt 6

154-C—Or do they have like … I mean, like, with our theatre, we have the theatre list and the consent, so regardless what they’ve written on the consent, we compare that with what we have [?] patient, but we compare that with what we have on our theatre list as well. If they tally … I mean, do they sort of look at it and look at the X-ray and have a whiteboard with, like—you said there was something where they go, “Left arm,” – I’m not sure, but you know, “Left arm, left arm” on the board, you know, [?], that sort of process as well. So, you need to understand their process.

With this judgmental talk now having seeped into the proceedings, the investigation begins to leap across technical issues, critical comments and people's sensitivities. The following excerpts serve to illustrate this ‘volatility’ (Iedema, Rhodes, & Scheeres, in press). Soon after the previous exchange, team members make explicit that they lack sufficient knowledge of the processes surrounding the incident, and they turn to formulating their questions and identifying who to approach with those questions:

Excerpt 7

184-D—I think there's a variety of people perhaps that need to be asked, “How is it normally done?”, because you’re … it's a possibility that you might use this process and you might use that process, but if you’re hearing from the [laboratory specialist] and the nurse or whatever, and the [specialty doctor], “Well, we do this, this, this, this and this.”

Then, after having framed these questions in technical terms, team members consider what questions now need to be asked: ‘Is it an appropriate workload?’, with the term ‘appropriate’ harbouring a judgment about practice. Immediately following that, this judgmental talk is then reflexively subjected to a consideration of how the question ‘Is it an appropriate workload?’ might be heard by those interviewed. The question is then rephrased such that interviewees’ sensibilities are respected:

Excerpt 8

236-A—… we know that they’ve introduced a new CT scanner down there, and they’ve sort of split – doubled the workload more or less with a similar amount of staff, so … I don’t want to be too sort of too leading in the questions that we ask, but … when we’re sort of talking about the busyness, you know, we need to sort of ask, “Is it an appropriate workload? Appropriate, safe workload?”

237-D—Are there benchmarks?

238—[?]

239-C—Not really, not really

240-A—Would you ask them that sort of question, which might make people feel a little bit defensive or would you say, “How many people were … you know, how many patients did you have in the department at the time?” Like, and they’d say, “Well, it was full,” or you know, “We were half full or … “And then, “And what sort of staffing was around at the time?”, you know, the sort of questions that …

After addressing how to phrase the question about workload, the talk shifts back to reconstructing a technical description of ‘the normal day’:

Excerpt 9

251-E—Yeah, and how was … how was their day structured? Like, do they just keep running all day just continuously or over the—which you would assume was a meal break time—do they actually reduce the number of patients they’re bringing in, because it's that sort of normal activity of daily work we need to understand to be able to suggest any changes or something that might make it safer.

From there the talk turns again towards comments that compare and judge what goes on in radiology against how things are done in the team members’ own specialties (‘Do they just go …’), and these are mitigated (at 278) by ‘I’m probably a bit too leading in my understanding of how things work down there’:

Excerpt 10

272-D—Is it actually recorded as you say—is there a whiteboard where, “We’re expecting at this time or we’ll be doing this and this and this and this?” Do they get written up as the forms are checked out …

273—[?]

274-D—Or do they just go …

275-E—Or do they just say, “Oh, well, we’ll do this one now because that room's going to be free,” or … ?

276-C—You just give them a form.

277-B—Do they have some sort of systematic …

278-A—Well, they do have a system. Again, I’m probably a bit too leading in my understanding of how things work down there—I don’t know if I should …

In this way, the talk moves from reconstructing, in ideational terms, the typical day in the clinical area where the error occurred, to critically gauging the incident against what would have happened in people's own work domains; from formulating probing questions about procedure to phrasing these questions, so that colleagues will not get alarmed or defensive, and from identifying potentially serious shortcomings in colleagues’ practices, to querying the moral basis of how RCA team members frame what others do. Excerpts 11 and 12 harbour related tensions.

Excerpt 11

419-D—Who were they though, like, staff members or were they, like, did they have agency people on somewhere on the ward who didn’t know the process or … because that could make a difference somewhere along the line, couldn’t it? If they’re not … you know, what—is there a handover if they have agency staff? What's the protocol for telling them, “Well, this is what we actually do?”

Excerpt 12

491-A—… So, in the group of questions, it’ll say, “Take the chronology along with you so you can sort of, you know, go through it all.” You’ll have a list of questions, the interviewing style or whatever, helps you to … you know, how to phrase so you’re not putting people on the back foot and making them comfortable. They need to know that, when you finish … when you’re finished with the interviews and you’ve collected all the information, that all the [?] completely destroyed and remains anonymous—which becomes a bit of a problem sometimes when you’re trying to convince the department heads about how you arrived at your conclusion, so that's another story. …

In Excerpt 11, the speaker raises an issue that touches on the identification of specific staff (‘agency staff’) and that is therefore potentially judgmental of those (clinician-managers who work under conditions of limited resources and staff availability) who provide inductions for the agency staff and allocate them to their tasks. Seeking the precarious balance between technical clarity, inquisition and trust, Excerpt 12 emphasizes that while these problems need to be understood, interviewees should not be “put on the back foot” and should be reassured that any identifying information will be destroyed.

If we scan back across these excerpts, we find that these tensions are realized in yet another way. At the level of individual utterances, one ‘syndrome’ stands out: a high level of hedging or ‘modalization’ (Halliday, [1985] 1994, p. 335). Modalization is deployed when we, as speakers, feel the need to signal not just uncertainty (about what we know) but also tentativeness (with regard to who we are). Used to mitigate the interpersonal impact of what we say, modalization involves not just items such as ‘would’, ‘might’, ‘may’, but also ‘I think’, ‘I guess’, ‘perhaps’, ‘it appears’, and the like. While these items are prevalent across all the excerpts presented above, Excerpt 7, part of which is reproduced here as Excerpt 14, is most emblematic of this syndrome (we have underlined the modalization):

Excerpt 14

184-D—_I think_ there's a variety of people _perhaps_ that need to be asked, “How is it normally done?”, because you’re … it's _a possibility_ that you _might_ use this process and you _might_ use that process, …”
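Such hedging items can be surfaced quite mechanically from transcript data. The sketch below is offered only to make concrete which lexical items count as modalization here; the word list is a small, illustrative subset drawn from the excerpts rather than an exhaustive inventory, and this kind of automatic flagging is an illustration, not the procedure used in the analysis itself.

```python
import re

# A small, hand-picked subset of modalization items drawn from the excerpts above;
# this is not an exhaustive inventory of hedging in English.
HEDGES = ["would", "might", "may", "i think", "i guess", "perhaps",
          "it appears", "a possibility", "sort of", "probably"]

def flag_modalization(utterance: str) -> list[str]:
    """Return the hedging items found in an utterance, in order of appearance."""
    lowered = utterance.lower()
    matches = []
    for hedge in HEDGES:
        for m in re.finditer(r"\b" + re.escape(hedge) + r"\b", lowered):
            matches.append((m.start(), hedge))
    return [hedge for _, hedge in sorted(matches)]

utterance = ("I think there's a variety of people perhaps that need to be asked, "
             "how is it normally done, because it's a possibility that you might "
             "use this process and you might use that process")
print(flag_modalization(utterance))
# -> ['i think', 'perhaps', 'a possibility', 'might', 'might']
```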

This modalization at the level of the utterance lends support to our view that this talk is not just a complex weave that ties together technical probing, critical judgment and reflexive concern about interviewees, but it is also a space where RCA team members perform uncertainty about the process, their roles and their knowledges. This complex prosody, we argue, is one demonstration of how undecided team members are about the tenor of their gathering: is it predominantly a technical–rational exercise in order to locate a clear-cut answer? Is it about critically evaluating and judging how work is done? Is it about negotiating and building interpersonal relationships with colleagues involved in errors such that there is learning and change?

Put thus, it becomes clear that RCA team members engage in what is essentially a four-fold task: (1) understanding the technicalities of clinical processes, (2) ensuring that the systems dimensions of errors are kept in view without ‘lapsing’ into blame, (3) manoeuvring around the emotional politics of investigating errors committed by people who may be friends or superiors, and (4) reflecting on one's own moral positioning. Each excerpt presented above highlights these facets of RCA's complex character in its own way. Particularly, each illuminates the extraordinary expectations that are inscribed into this new practice, and reveals the challenges it poses for health care practitioners.

This analysis also suggests that RCA is a typically post-bureaucratic device (Iedema, 2003). The term post-bureaucracy captures the organizational shift that appears to be occurring from pre-determined task and rank definitions towards tasks and roles that are emergent, answering to the changing requirements and local exigencies of work (Heckscher, 1994). To keep up with these dynamics and exigencies, employees are increasingly obliged to communicate and produce information about what they do, or what Hardt and Negri term doing ‘immaterial labour’ (Hardt & Negri, 2004). Drawing on the thesis that work in general is becoming communication-based (Castells, 2004), immaterial labour encompasses not just problem solving, analytical tasks and intellectual work, but also emotions and judgments, or ‘teleo-affective’ work (Schatzki, 2002). As the data analyzed above reveal, in RCAs this teleo-affective work takes centre stage alongside its technical–rational counterpart, with team members dynamically balancing critical judgments, emotional reactions and reflexive considerations.

In the section that follows, we extrapolate from this analysis to what is happening across contemporary health care more broadly, and discuss how it links to the shift towards accountability and transparency in how clinicians do and improve their work. First, we explore how RCA intensifies the reflexivity inscribed into the contemporary medical gaze by expanding it to encompass medical–clinical practice in general. Second, we capture the ways in which RCA, besides realizing local forms of learning and change, potentially limits the examination of adverse events to a micro-sociology of clinical failure, disabling clinicians from intervening in matters superordinate to clinical treatment design, such as the structuring of resource allocation and the organization of hospital services generally.

Section snippets

Discussion

Our analysis demonstrated that RCA creates a modality of relationship among clinicians that is emergent; that is, a new practice of communication about the clinical work among clinical colleagues in multi-disciplinary teams that they to some extent have to ‘invent’ as they go along, because it has not as yet been able to—and may never—settle into a taken-as-given social-organizational routine. As seen, participants tentatively elaborate their roles and tasks even in a case that does not deal

Conclusion

The above analysis has shown how clinicians investigating their colleagues’ practices become engaged not just in a technical enquiry into the structured causes of error, but in a performance of new kinds of conducts and sensibilities. Our analysis revealed that RCA encompasses four challenges. RCA:

  1. expects that team members come to terms with the technicalities of their colleagues’ practices;

  2. presumes that the separation of a systems perspective from personal blame is unproblematic;

  3. requires that

References (35)

  • M. Balint, The doctor, his patient and the illness, The Lancet (1955)
  • L.M.L. Ong et al., Doctor–patient communication: A review of the literature, Social Science & Medicine (1995)
  • S.J. Williams et al., The ‘limits’ of medicalization? Modern medicine and the lay populace in ‘late’ modernity, Social Science & Medicine (1996)
  • D. Armstrong, A new history of identity: A sociology of medical knowledge (2002)
  • J. Barker, Tightening the iron cage: Concertive control in self-managing teams, Administrative Science Quarterly (1993)
  • U. Beck, Risk society: Towards a new modernity (1992)
  • C. Bosk, Forgive and remember: Managing medical failure, 2nd ed., Chicago and London: University of Chicago... (2003)
  • T.A. Brennan et al., Incidence of adverse events and negligence in hospitalized patients: Results of the Harvard medical practice study, New England Journal of Medicine (1991)
  • M. Castells, The power of identity. The information age: Economy, society and culture, 2nd ed., Vol. 2, Oxford:... (2004)
  • Learning from Bristol: The report of the public inquiry into children's heart surgery at the Bristol Royal Infirmary 1984–1995 (2001)
  • Incident information management system circular (2004)
  • N. Douglas et al., Inquiry into obstetric & gynaecological services at King Edward Memorial Hospital 1990–2000 (2002)
  • A. Dzur, Democratizing the hospital: Deliberative-democratic bioethics, Journal of Health Politics, Policy and Law (2002)
  • M. Foucault, The birth of the clinic: An archeology of medical perception (1973)
  • M.A.K. Halliday, An introduction to functional grammar, 2nd ed., London: Edward... (1994)
  • M. Hardt et al., Multitude: War and democracy in the age of Empire (2004)
  • C. Heckscher, Defining the post-bureaucratic type
