
The frustrating case of incident-reporting systems
Kaveh G Shojania

Dr K G Shojania, Room D470, 2075 Bayview Avenue, Toronto, Ontario, M4N 3M5, Canada; kaveh.shojania{at}


Even for those interested in patient safety and quality improvement, incident-reporting (IR) systems often represent a source of frustration rather than a useful tool for capturing important patient-safety and quality-of-care problems. IR systems suffer from well-known limitations.1 They detect only a small percentage of target problems,2 and the incidents that users do choose to report often include a large percentage of mundane events. Underuse of IR systems is particularly marked among physicians. In the survey reported by Farley et al (see page 416) in this issue, 86% of hospitals responded that physicians submitted “few or no” incident reports.3 This poor showing among physicians may reflect the misperception that incident reports fall under the jurisdiction of nurses and pharmacists. However, other reasons undoubtedly include the same factors that affect the use of IR systems by other healthcare professionals, including the time required to fill out reports and the perceived utility of doing so.4

A more fundamental problem that bedevils the use of IR systems is that they generate numerators without denominators: X patients bled while receiving anticoagulants, and Y patients fell out of bed, without any indication of the total numbers of patients at risk for these events. In principle, hospitals could follow trends in these numerators on the assumption that the unknown denominators remain relatively constant over time. However, IR systems typically detect such small numbers of the targeted events that even small changes in reporting practices can produce large changes in the apparent incidence of events.

An incident during a recent rotation as the attending physician on an inpatient teaching unit illustrates the problem. Frustration with the nurses’ inattention to one of our patients elicited from me a grumble about the quality of nursing care in general on that ward. One of the interns (fortunately, the only person within earshot of my comment) remarked that the team had encountered numerous problems on that unit over the past 2 months—important medications not administered, inattention to orders for stat blood work and so on. He added: “Last month we submitted a bunch of incident reports, but nobody seemed to care, so we stopped bothering.”

Thus, the IR data for that unit will likely show an upward tick in the frequency of various events for the month of July and then a return to baseline in August. Rather than reflecting any change in risks to patients, this change will simply reflect the arrival of a new cohort of trainees, initially enthusiastic to bring a number of concerning problems to the attention of hospital administrators, followed by the loss of interest in attempting to do so. In addition to illustrating the difficulty in interpreting temporal patterns in incident reports, the intern’s comment highlights one of the most important shortcomings even of well-intentioned efforts to promote the use of IR systems: they typically produce no change. Failure to act on (or even respond to) incident reports submitted by front-line personnel creates a vicious circle: staff become even less likely to take the time to report incidents, and administrators consequently regard IR systems as producing no useful data.

The results of the study by Farley et al3 capture both the problems that lead to the sad state of affairs regarding hospital IR systems and the missed opportunities inherent in this state. Surveying a large, representative sample of hospitals in the United States, these investigators found that almost all institutions (98%) report having a centralised IR system. Unfortunately, the nature of these systems is difficult to infer from the results. Presumably in the interest of achieving an adequate response rate for their survey, the investigators avoided burdening participants with detailed questions about key characteristics of their IR systems. Thus, for instance, the 75% of non-critical access hospitals that use both paper and computer IR reporting systems likely encompass a wide range of system types in terms of what the combinations of paper- and computer-based reporting really mean and what they accomplish.

Similarly, approximately 75% of hospitals reported distributing IR data to key hospital departments and committees, but the nature and impact of this activity undoubtedly vary—from communications that elicit little more than perfunctory discussion at committee meetings to detailed review of potentially concerning events or patterns and undertaking of action plans in response to some of the data. Despite the likelihood of wide variation in what respondents meant by some of their answers, it is striking that 25% of hospitals do not even attempt to distribute IR data to relevant hospital departments and leaders.

Two questions emerge from consideration of the data reported by Farley et al3: how best should hospitals undertake IR, and how should they monitor for safety problems in general? Regarding the improvement of IR, the system must be easy to use. Overwhelming users with detailed questions serves only to decrease the likelihood that users will submit reports. While the system must identify the patient and provide sufficient detail to permit classification of the event, the goal of classification lies in triaging the need for further investigation. Many incidents, even if important (eg, common adverse drug events, patient falls, decubiti), do not warrant investigation as isolated incidents. In such cases, the IR system should simply capture the incident and the extent of injury to the patient, not barrage users with a series of root cause analysis-style questions about the factors contributing to these events.

The proper targets of IR systems consist of rarer events that reveal important system problems unlikely to be revealed by other means. While root cause analysis questions must be addressed in these cases, such questions are better asked and answered by personnel trained in accident investigation, not front-line users reporting the incidents. Thus, even in the case of critical incidents, the goal of the IR system lies in incident classification and triage. Achieving this goal requires very little input from users and can even be menu-driven, with the creation of explicit categories based on incident type (eg, patient misidentification, administration of the wrong medication, dosing errors involving high-risk medications) and the severity of injury. Impressions from users regarding the potential causes are of secondary importance compared with simply capturing the event so that trained personnel can investigate further. Capturing such events has such high value that some hospitals heavily invested in patient safety allow users simply to enter a very brief description into the IR system or even make use of a telephone system (or even call centres, in the case of large regional IR systems) in order to increase ease of reporting and timely investigation.

In terms of how best to monitor for safety problems in general, no single approach adequately detects the full range of target events.1 5 Thus, hospitals must choose several of the many available methods of screening for and detecting patient-safety problems. In addition to IR, such methods include stimulated voluntary reporting (eg, confidentially contacting clinicians and asking them about the occurrence of critical incidents),6 random chart audits, traditional clinical venues such as the autopsy7 and morbidity and mortality conferences, administrative data,8 chart-based trigger tools,9 computerised surveillance of medication and laboratory data for signals of potential adverse events,10 natural language screening of routinely available electronic records such as discharge summaries,11 patient complaints12 and executive walk rounds.13 More intensive methods include direct observation (eg, of medication administration)14 and real-time prospective surveillance.15 These methods often detect different types of events;5 hence the recommendation by many experts to undertake more than one approach to monitoring for patient-safety problems.

Despite the frequent frustrations of IR systems, encouraging examples do exist.16 17 Successful systems use various methods to encourage reporting and enhance usability, provide feedback to users, communicate data effectively to hospital leaders and tightly couple IR with improvement efforts. In order to emulate the successes of such systems, organisations must recognise that the generation of periodic reports from IR systems does not constitute an end in itself. IR systems must stimulate improvement. Achieving this crucial goal requires collecting data in such a way that important signals are not lost amidst the noise of more mundane occurrences and that hospital administrators do not experience information overload. If submitting incident reports produces no apparent response from hospital administrators, front-line personnel will predictably lose interest in doing so. In addition to undermining efforts to monitor for safety problems, lack of meaningful change will negatively affect the culture of the organisation in general.


KGS is supported by a Government of Canada Research Chair in Patient Safety and Quality Improvement.



  • Competing interests: None.
