‘The Problem with…’ series covers controversial topics related to efforts to improve healthcare quality, including widely recommended, but deceptively difficult strategies for improvement and pervasive problems that seem to resist solution. The series is overseen by Ken Catchpole (Guest Editor) and Kaveh Shojania (Editor-in-Chief).
Seminal reports that launched the modern field of patient safety highlighted the importance of learning from critical incidents.1,2 Since then, incident reporting systems have become one of the most widespread safety improvement strategies in healthcare, both within individual organisations and across entire healthcare systems.3
There are some strong examples of learning and improvement following serious patient safety incidents.4,5 But major disasters have also revealed widespread failures to understand and respond to reported safety incidents.6,7 Between these two extremes exists a range of frustrations and confusions regarding the purpose and practice of incident reporting.8–10 These problems can be traced to what was lost in translation when incident reporting was adapted from aviation and other safety-critical industries,11 with fundamental aspects of successful incident reporting systems misunderstood, misapplied or entirely missed in healthcare. This mistranslation of incident reporting from other industries has left us with confused and contradictory approaches to reporting and learning, seriously limiting the impact of this potentially powerful safety improvement strategy.
From orange wires to filing cabinets
The original ambitions for incident reporting in healthcare were deceptively simple. Staff would identify and report problems and mishaps; patient safety risks would be investigated and addressed and the resulting lessons would be widely shared and implemented.12 A powerful symbol of this ambition was the ‘orange wire’.13 Successful patient safety incident reporting systems would support system-wide learning in the same way that the discovery of a defective ‘orange wire’ in a particular aircraft type might cause rapid and systematic action across the entire aviation industry.13 But, in translating incident reporting into healthcare from aviation, what was largely missed was that, in airlines and other industries, the rapid detection and resolution of safety issues depend on a deeply embedded and widely distributed social infrastructure of inquiry, investigation and improvement.
Incident reports provide brief—and usually ambiguous and sometimes mundane—triggers for collective inquiry and coordinated action. The incident reports themselves do not matter nearly as much as the practical work of investigating and understanding a particular aspect of an organisational system and then working collaboratively to improve it.14 In aviation, incident reporting systems grew out of a decades-long history of conducting routine, structured, systematic investigations into the most serious aviation incidents and accidents.
Healthcare has nothing like this history of systematic investigation. Instead, incident reporting systems have focused on collecting and processing large quantities of incident data.15 The orange wire has been supplanted by another image drawn from aviation and described early in the patient safety movement—the filing cabinet: ‘In 1989 British Airways possessed 47 four-drawer filing cabinets full of the results of past investigations. Most of this paperwork had only historic value. An army of personnel would have been required if the files were to be comprehensively examined for trends or to produce useful analyses’.2
Rather than recreating the organisational infrastructures that underpin routine investigation and coordinated inquiry in aviation, healthcare has simply reproduced the filing cabinets. This focus on the quantity of incidents reported rather than the quality of investigation and improvement activities has perpetuated a range of inter-related problems (table 1).
Complications, confusions and contradictions
The problems that beset incident reporting in healthcare span the confused role of measurement, the unclear relationship with performance management, the underspecified processes of investigation, and the complicated nature of learning and improvement.
‘Report it all’
Criteria for which incidents to report tend to be framed broadly—‘any unintended or unexpected incidents that could have or did lead to harm’.12 This catch-all definition misses an important opportunity for using reporting criteria to shape attention and set priorities. Specific and detailed reporting criteria can evolve over time as understandings of patient safety risks evolve, and can encourage the reporting of precursor and near-miss events, such as missing critical equipment or poor staffing levels. Reporting criteria can—and should—always include a residual category that catches any other safety-relevant events.16 But making such a catch-all category the main definition misses the opportunity to set the safety agenda and focus attention on key risks from the outset.
‘More is better’
Higher levels of overall reporting are thought to reflect a better safety culture,17 and increasing reporting remains a constant goal in many healthcare systems. But the frequency of reported events is a blunt measure with several complications. First, it ignores the critical question of learning from incidents: repeated reports of the same type of event suggest a strong culture of reporting but a poor culture of learning. Second, a focus on quantity over quality leads to large numbers of reports with little new information. For instance, falls account for approximately one-fifth of the 1.7 million incidents reported to the National Health Service (NHS) National Reporting and Learning System.18 Arguably, the incidence of common types of patient falls could be better recorded through other means, leaving incident reporting systems to focus on the most serious, unusual or unexpected events from which most can be learnt. In airlines, safety investigators worry about over-reporting—potentially swamping important signals with noise.14
‘Incidents measure safety’
Numbers or rates of reported incidents offer a particularly poor way of measuring safety performance.8 Yet trends and charts of reporting rates remain commonly used organisational safety measures. Incident reporting systems were never intended to provide a system of measuring safety problems.19 These systems detect only a tiny fraction of adverse events,20 with reporting rates determined by a range of cognitive, social and organisational factors. Reduced reports of a particular type might simply indicate that people became accustomed to something happening, grew tired of reporting or stopped noticing the problem in question. Thus, when reports decline, incident data on their own cannot distinguish between a reassuring improvement in safety and a concerning organisational blind spot.
‘Reports are biased’
Safety incident report data contain numerous biases.21 This is unavoidable. Incident reports begin with one person's partial view of a complex clinical and organisational situation, and reporting behaviour reflects a range of social factors.22 While these biases present a weakness in terms of epidemiological measurement, they can present a strength in terms of safety management. For instance, aviation incident reporting systems actively harness surveillance bias—where the more you look, the more you find. Highlighting a troubling problem can lead to more people noticing events and precursors, increasing reporting and generating richer, broader insight. What makes for horrible statistics can make for wonderful learning.
‘Improve data quality’
There are continual calls to improve the quality of reported incident data, along with complaints about its epidemiological shortcomings.21,23 Collecting relevant, useful and meaningful information through incident reporting systems is important. But the use of incident data needs to be understood in relation to its purpose. The primary purpose of an incident report is to identify an underlying risk in the healthcare system and to determine the need for further investigation and analysis. Because incident data cannot establish epidemiological trends in safety (they say more about trends in reporting behaviour), incident reports do not need much detail. For any important event, the resulting in-depth investigation provides the level and quality of detail required.
If a patient receives a medication intended for another patient, that simple fact alone marks the event as worth investigating. Why ask for additional details in the report when these details may prove incorrect? As a common saying in aviation safety goes, early reports are often inaccurate and usually entirely wrong. Improving the quality of incident data thus misses the purpose of reporting—triggering inquiry. The need for improved quality lies with the investigations, not with the reports themselves.
‘Taxonomy is key’
Increasingly sophisticated taxonomies24 help establish a meaningful and logical ontology of patient safety problems and causal factors. But incident reporting systems require efficiency more than sophistication. Categorisation schemes need to relate events with similar characteristics, capture key clinical and system factors, and support search and analysis. Yet most reported incidents include limited information, and asking for more only discourages reporting (and often generates inaccurate information). Subsequent deeper investigation will reveal the important details. Taxonomies therefore need to be pragmatic and flexible enough to accommodate these varied purposes.
‘Tell your boss’
Many incident reporting systems involve staff reporting incidents to their superiors. It is often entirely appropriate that supervisors and line managers are notified of events and directly involved in any investigation and response. However, reporting directly to a line manager potentially influences what is disclosed and can introduce a damaging filter that prevents bad news being passed up a hierarchy. The typical model in aviation and other industries has incident reporting systems operated and managed by an independent safety team that reports directly to the board level.14 This independence ensures an unfiltered and honest account of safety issues within an organisation. It also avoids the problem of line managers inappropriately using incident reports to discipline or punish staff. Critically, the most serious incidents and accidents in aviation and other industries are reported to and investigated by an entirely independent national safety investigator to ensure that the system-wide causes and required improvements can be impartially identified.25,26
Interestingly, this approach is currently being developed in England, building on the model used in other safety-critical industries. In response to recent proposals,26 a parliamentary select committee inquiry recommended that a permanent national independent body be established to investigate the most serious patient safety incidents and systemic risks.27 The government has accepted this recommendation and expects an independent patient safety investigator to be in place in England by April 2016.28
‘Report and feedback’
Incident reporting systems are intended to provide an integrated view of the safety issues emerging across an organisation or healthcare system,29 as well as a structure within which those issues can be collaboratively investigated and addressed. Both of these aims depend on actively engaging with staff: drawing on the collective intelligence of staff to build a picture of emerging risks and working with them to understand and address those of highest priority.30 Too often in healthcare, incident reporting remains a relatively passive process of submitting reports on one hand and issuing feedback on the other—a process of information transfer rather than participative improvement.
This passivity and lack of two-way engagement creates several problems. Staff can perceive incident reporting as simply a way of logging problems and waiting for fixes, removing any responsibility for local improvement. Conversely, staff can simply fix a problem themselves and never report it, removing the opportunity for broader learning and sharing of insights.31 Moreover, a significant proportion of patient safety incident reporting systems appear to provide very little feedback to staff whatsoever.32,33
Feeding back information to staff is critically important to demonstrate the value of reporting and inform staff of actions taken and lessons learnt. But even the principle of ‘feedback’ remains relatively passive and transactional. An incident report represents someone speaking up, stating that an issue concerns them and that they have an interest in its improvement. Rather than simply collecting and feeding back information, incident reporting systems should provide spaces that encourage open conversation, participative investigation and collective improvement of safety.
‘Incidents produce learning’
The core functions of an incident reporting system are twofold. One is to use incidents to identify and prioritise which aspects of a healthcare system and its underlying risks need to be examined more closely.34 The other is to organise broader investigation and improvement activities to understand and address those risks. These active processes of investigation, inquiry and improvement underpin learning. However, analysing incident reports as data constitutes the core focus in many safety reporting systems in healthcare.
Analysing incident report databases can offer some useful insights. But broader investigation, inquiry and action are needed to drive actual learning and improvement. At best, an incident report offers a trigger for further investigation and inquiry into a specific event or system issue. Analysing incidents does not itself produce learning. Equally, ‘lessons learnt’ from patient safety incidents are often held up as taking the form of an organisational safety alert, an updated policy or a new set of recommendations. But learning is a complex social and participative process that involves people actively reflecting on and reorganising shared knowledge, technologies and practices.35 It is these processes of action and reorganisation that constitute learning and must be supported through investigation and improvement.36 The search for safety starts, rather than ends, with incident reports.
‘Accounting for failure’
Incident reporting systems are increasingly being drawn into the realm of performance management, with incident data being used to hold organisations to account for safety performance.37 Aside from deep problems relating to measurement, this use of incident reporting data for summative judgement of performance can run counter to more formative processes of learning and improvement.38 It creates pressures to game reported data and focuses attention on counting—and accounting for—failure. Instead of using incident reporting systems to account for failure, they can be more productively used to create regimes of mutual accountability for improvement. In other industries, incident reporting systems provide a space in which individuals, groups and organisations explain and address the sources of risk in their area of responsibility. Managers must account for the improvements made as a result of incident investigations.14 Rather than assigning responsibility for causing failures, incident reporting should assign responsibility for improving systems.
From reporting incidents to sharing improvements
Patient safety incident reporting is beset by problems, but the solutions to these problems become apparent when incident reporting is viewed as a process of social and participative learning, rather than as a mechanism of data collection and analysis. At core, incident reporting systems provide an infrastructure for detecting emerging risks, for investigating and explaining serious incidents and harmful events, and for understanding and improving the practices and systems of healthcare.
In the past 15 years, healthcare has focused primarily on building the technical infrastructure for incident reporting systems: online reporting systems, data collection forms, categorisation schemes and analytical tools. These are all important foundations. But this focus on incident data is also the source of many of our current problems with incident reporting: we collect too much and do too little. Learning depends critically on the less visible social processes of inquiry, investigation and improvement that unfold around incidents. Over the next 15 years we must refocus our efforts and develop more sophisticated infrastructures for investigation, learning and sharing, to ensure that safety incidents are routinely transformed into system-wide improvements.
Competing interests CM declares consultancy in patient safety for NHS and other healthcare organisations. More recently, CM acted as an advisor to the Public Administration Select Committee inquiry into the investigation of clinical incidents in the NHS, and is a member of the Independent Patient Safety Investigation Service expert advisory group.
Provenance and peer review Not commissioned; internally peer reviewed.