The capacity for learning is directly affected by how potentially dangerous events are interpreted and categorized
The categories used by organizations to classify and sort events are not trivial; they channel attention, shape interpretations, and serve as springboards for action. One example is the way in which organizations categorize small failures. Some organizations classify mistakes that have been caught and corrected with no untoward consequences—such as a near collision in aviation—as a “near miss”, a kind of failure that reveals how close the organization came to disaster. Other organizations do just the opposite. They look at a near miss and label it a “close call”, seeing it as evidence of success and of the ability to avoid disaster.1,2 The problem is that organizations that see mishaps as close calls often fail to treat these events as warnings of areas of vulnerability. Labeling a near miss a “close call” curtails the cycle of learning: it reinforces beliefs that current operations are adequate to contain disaster, which in turn limits the search for information and circumscribes actions to safeguard future operations.2 Conversely, when close calls are seen as near misses, it becomes clearer that the event is evidence of danger in the guise of safety rather than evidence of safety in the guise of danger.1,2
The main message is that learning from experience is harder than it looks, both for individuals and for organizations. The capacity for learning and the accumulation of knowledge is directly affected by how potentially dangerous events are interpreted and categorized, a key point reflected in the paper by Tamuz et al3 in this issue of QSHC, which studied medication errors and how they were reported and handled, particularly by pharmacists. The study reveals a number of intricacies and unintended consequences of error reporting systems in complex healthcare organizations, and shows how the classification of a medication error sets in motion a number of organizational routines for gathering and keeping safety related information, for analyzing and investigating errors, for rewarding or penalizing personnel, and, perhaps most importantly, for learning from failure events. The authors take seriously the idea that understanding the systemic aspects of error and its prevention requires a methodology that accounts for how the participants themselves understand the event. Thus, they privilege the perspectives of those who provide care. This is a refreshing departure from most studies of error and its incidence, which rely on data from medical records or other archival sources.
Tamuz et al found that pharmacy staff did not consider an error (such as a prescribing error) to be a reportable incident if it was caught and corrected in the pharmacy; medication errors that slipped through the pharmacy and reached the floor or the patient (such as an error of dispensing or administration) were reported to the hospital incident reporting system, were formally investigated, and sometimes resulted in negative consequences for the pharmacist deemed accountable for the mistake. Errors detected and corrected by pharmacy staff were labeled as “interventions” and were considered to be “non-events”—acceptable mistakes that occurred as a natural part of the work flow. “Defining away” potential close calls in this manner meant that the number of reported medication errors systematically and severely underestimated the number that actually occurred. Intervention data were consolidated, analyzed, and used by pharmacy managers to reward pharmacists who intervened (and to encourage future interventions), and also served as the basis for pharmacist education programs and for improvements to pharmacy practices. However, these data were not shared with physicians or others who might benefit from such information, nor were they requested by physicians and others who might be interested in building safer pathways of care. And herein lies a set of important issues that were not (and have not been) addressed in patient safety research. Why didn’t pharmacists share what they knew? Why didn’t physicians ask pharmacists what they knew? And why didn’t physicians ask why they weren’t told?
The tendency in organizations is to interpret no news as good news. If one hears nothing, one assumes that things are safe, that things are going well. But things may not be safe. Studies of high-risk but highly reliable organizations such as nuclear-powered aircraft carriers and chemical manufacturers,4,5 which share with medicine similar contextual characteristics and a potential for adverse events that materialize from small failures, suggest that safe operations require sensitivity to the way in which activities are interrelated.4,5 It is in those relationships, those handoffs, that errors accumulate or are caught. Medical professionals are to be praised for their tendency to attack the problem of medical errors by working to improve technical skills, but the locus of the problem lies upstream and downstream from the skilled individual—in the organizing and connecting of activities.4,5
Accurate error incidence rates will be difficult—if not impossible—to establish, given the ambiguities in defining what constitutes a medical error and the other difficulties with reporting systems shown in the study by Tamuz et al. Future studies may therefore want to focus on understanding strategies for recovering from error and ways to create resilient systems that can learn from near misses and mistakes.6 Medicine is often driven by the idea that perfection is the ultimate goal and that mistakes are a personal and professional failure. This mindset, while praiseworthy, can blind people to the recognition that mistakes are normal and can provide opportunities to learn. However, learning cannot take place in a context where information about mistakes is disconnected, feedback is limited, and people do not recognize vital interdependencies.