STILL LEARNING HOW TO LEARN
The emergence of safety in health care as a legitimate systemic public issue has turned on its head a good deal of our traditional thinking. One area where this has been particularly acute is what we could call “medical epistemology”—how we know we know something in health care. When health professionals first encounter the study of safety in health systems, they frequently feel a tension between the familiar world of epidemiology and bioscience and this strange new world, with its roots in psychology and engineering and its methods seemingly subjective and anecdotal. Cook1 pictures them caught between these two worlds, trying to learn new ways of learning about safety but holding on to the security blanket of more familiar evidentiary methods. Their problem goes beyond simple intellectual assent; to some extent there seems to be an emotional and aesthetic—one might even say visceral—discomfort with these new methods, a fear that letting go of the evidence-based life ring will inevitably lead to superstition, myth, and chaos. This tension has led to debates about proper methods,2 with both sides largely preaching to the already converted.3
The classic paper by March et al4 republished here should help redirect this conversation into more productive areas. By describing in detail ways that organizations learn, or attempt to learn, from history when history has not been “generous with experience”, they remove some of the mystery that has been associated with learning from what have often been disparaged as mere anecdotes.
Most of the events of interest in patient safety are relatively rare—in fact, we wish they were even rarer. The traditional biomedical response to scant data is to pool experience across multiple events, but for safety this is often not practical and, even more often, not desirable for two reasons: (1) the events of interest are so highly contingent on a specific context that pooling loses rather than adds information, and (2) some failures, such as death from mistaking potassium chloride for furosemide, are so devastating that we cannot afford to wait for more events before drawing the appropriate safety lessons. March et al point out that organizations in this situation can improve their learning in three ways. Firstly, they can experience events more richly, by attending to more aspects of each event and by allowing more interpretations, viewpoints, and preferences to bear on it. Secondly, they can enrich their experience by attending to near-events—failures narrowly averted or successes improbably achieved—and to hypothetical events, for example through simulation or the prediction of possible modes of failure. Finally, they can develop better understandings of events by allowing those understandings to emerge from the details of many aspects of the experience rather than from formal analysis of unambiguous objectives in a priori models.
This latter process contrasts sharply with familiar inferential methods. Although it is clearly not as well specified or understood as traditional verification and validation research, it is well established in the building of scientific knowledge,5 even though it is rarely acknowledged in health care. (It is nonetheless familiar: it is how we teach clinical medicine.) This is not to say that the sort of inductive theory building described by March et al is perfect; it faces challenges to reliability and validity, and it appears more difficult to reduce to a “method” or to teach to others. However, by describing some of the details of the process and posing some critical questions about it, March et al have made this sort of learning more accessible to researchers and managers, and have raised hopes that progress can be made in finding the proper relationship between the two research worlds.
While it seems clear that organizations sometimes do learn, what is even more striking in health care is how often they do not learn at all. Tucker and Edmondson6 have pointed out how failures of organizational learning in hospitals result from the confluence of organizational and psychological factors. Their analysis reinforces the points made by March et al. They found that healthcare organizations have difficulty learning from experience because, in a sense, they have no experience. Problems and failures encountered by front line practitioners are commonly resolved on the front lines by a process of empirical patching, so their very existence never becomes known to much of the organization. What knowledge is gained is shared primarily within similar social groups—nurses, for example, speaking mainly to other nurses—so the number of viewpoints, interpretations, and values available is limited. Because of production pressure, front line workers have no time or energy to invest in examining near-events or in “what if” projections of possible failures. Finally, when data do reach managers or investigators positioned to respond less reactively, their training has not provided them with the analytical skills suited to abstraction and inductive empirical generalization from rare events and, in fact, may have biased them against such reasoning.
If health care is to become safer, we will need to find ways to enhance our learning from small samples. This may not be as difficult as it first seems, since it may involve no more than bringing into the open processes that we have traditionally suppressed but which were operating nonetheless. For example, a recent review of the management of shock syndromes in children, undertaken to develop practice guidelines, deplored the paucity of high quality level I evidence while simultaneously noting that mortality had fallen 10-fold.7 Clearly, some learning had taken place despite the lack of “evidence”! It is interesting to speculate that much of the success of “scientific” medicine might depend more on the subjective decisions of thoughtful clinician researchers about what would be worth trying than on the rigour of its inferential methods.
Admitting new sorts of evidence and new ways of thinking is always risky. We will clearly learn the wrong lessons some of the time, and we need to learn a great deal more about what distinguishes good inferences from poor ones when they are drawn from samples of one or fewer. But the alternative—discounting this method of learning altogether—seems much less palatable. The paper by March et al should move us to learn more about how to learn about safety.