Context: A major purpose of incident reporting is to understand contributing factors so that causes of errors can be uncovered and systems made safer. For established reporting systems in US hospitals, little is known about how well the reports identify contributing factors.
Objective: To characterise the information incident report narratives provide about contributing factors using a taxonomy we developed for this purpose.
Design: Descriptive study examining 2228 reports for 16 575 randomly selected patients discharged from an academic and a community hospital in the US between 1 January and 31 December 2001.
Main outcomes measured: Reports in which patient, system and provider (errors, mistakes and violations) factors were identifiable.
Results: 80% of reports described at least one contributing factor. Patient factors were identifiable in 32%, most frequently illness (61% of these reports) and behaviour (24%). System factors were identifiable in 32%, most commonly equipment malfunction or difficulty of use (38%), problems coordinating care among providers (31%), provider unavailability (24%) and tasks that were difficult to execute correctly (20%). Provider factors were evident in 46%, but half of these reports contained insufficient detail to determine which specific factor. When detail sufficed, slips (52%), exceptional violations (22%), lapses (15%) and applying incorrect rules (13%) were common.
Conclusions: Contributing factors could be identified in most incident-report narratives from these hospitals. However, each category of factors was present in a minority of reports, and provider factors were often insufficiently elucidated. Greater detail about contributing factors would make incident reports more useful for improving patient safety.
Incident-reporting systems in high-risk industries are considered models for healthcare1–3 and follow a common approach derived from the “critical incident technique.”2,4 In the US aviation reporting system, for example, pilots and others submit detailed narratives containing first-hand observations about what happened and why. To encourage such insights, the system promotes timely reporting, emphasises near-misses and shields reporters from blame.2,5,6 Analysts use the narratives to understand how “system factors” (facilities, equipment, tasks, personnel availability and qualifications, and organisational culture and goals) might have contributed to incidents, either directly or by influencing human performance. They also examine human errors, mistakes and violations.7–9 Then the analysts formulate recommendations for system improvement.2,5,6
Reporting systems following this approach have apparently succeeded in some healthcare settings.10–13 In US hospitals, however, incident reporting systems diverge from this approach, having been developed primarily to prevent and prepare for litigation.14,15 Many providers believe that the reports are used to assign blame rather than improve safety and that reports can be exposed during litigation.16–19 Such unfavourable attitudes may explain why hospital reporting systems capture relatively few errors and adverse events.20–22
Importantly, these same attitudes may also deter the providers who do report from providing rich detail about system factors and provider errors, mistakes and violations (herein, “provider factors”). In this study, our objective was to characterise, in two US hospitals, the information incident report narratives provide about contributing factors. Patient, system and provider factors represent opportunities to prevent patient injuries; therefore, we examined all three.
In terms of their purpose, functioning and definition of reportable incidents, the reporting systems we studied are typical for US hospitals. Reporting culture has evolved variably in recent years,23,24 and most hospitals still use paper reports (C Pham, personal communication, The RAND Corporation, Santa Monica, 2006).3,15,23,24 Because the study hospitals introduced electronic reporting in 2002, we chose to study paper reports from 2001.
We examined contributing factors in a representative sample of about 2000 incident reports for inpatients at two hospitals.
We studied an academic referral centre and a nearby affiliated community hospital in a major metropolitan area in Southern California. The hospitals had the same reporting procedures; reporting culture had not been formally evaluated. Reportable incidents were defined as “Any occurrence that is not consistent with the routine operation of the medical centre and that potentially may, or actually did, result in injury, harm or loss to any patient, or visitor of the medical centre.” Reporting was voluntary but not anonymous. Reporters completed one-page paper forms, handwriting narratives in a 6.4×10.8 cm area and attaching as many pages as desired. Nursing staff were trained in submitting reports, but not in describing contributing factors.
To obtain a representative sample of reports for a hospitalised population, as described in a related paper,25 we randomly sampled enough patients discharged in 2001 to evaluate about 1000 reports at each hospital. At both hospitals, 9% of patients had one or more reports. We selected all 9850 patients at the community hospital. At the academic hospital, patients with prolonged hospitalisations often had multiple reports, which decreased the number of patients needed; we used an SAS random-number generator to select 30.0% (6725) of 22 430 patients. Because discharge databases listed hospitalisations rather than patients, we oversampled patients with multiple discharges and, therefore, weighted subsequent analyses to adjust for this. Institutional Review Board approval was obtained; informed consent was not required.
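The weighting adjustment described above can be illustrated with a minimal Python sketch (the authors used SAS; the patient records here are hypothetical). Because sampling from a discharge-level database selects a patient with k discharges k times as often, each sampled patient is weighted by 1/k to restore patient-level representativeness.

```python
# Hypothetical discharge records: one row per hospitalisation, so a
# patient with k discharges appears k times in the sampling frame.
discharges = [
    {"patient_id": 1}, {"patient_id": 1},                       # two discharges
    {"patient_id": 2},                                          # one discharge
    {"patient_id": 3}, {"patient_id": 3}, {"patient_id": 3},    # three discharges
]

# Count discharges per patient.
counts = {}
for row in discharges:
    counts[row["patient_id"]] = counts.get(row["patient_id"], 0) + 1

# A patient with k discharges is k times as likely to be sampled,
# so analyses weight each sampled patient by 1/k.
weights = {pid: 1.0 / k for pid, k in counts.items()}

assert weights[1] == 0.5 and weights[2] == 1.0
```

This is the same logic the SURVEYFREQ/SURVEYMEANS procedures apply when adjusting estimates for the number of hospitalisations per participant.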
We developed a hierarchical taxonomy that included patient, system and provider factor categories at the top (“major categories”) and increasingly detailed subcategories on lower tiers. Patient-factor subcategories were based on pilot work with incident reports. We searched MEDLINE for publications addressing system factors, human factors (using those terms plus medical errors, incident/event reporting and analysis); examined citations; and obtained books and reports synthesising the role of system and human factors in medical errors. We did not search the psychology or ergonomics literature. The final system-factor subcategories integrated multiple sources, which generally repeated similar types of factors or elaborated upon a particular type.7–9,12,13,26–33 Pilot work did not identify any missing factors.
For provider factors, we used pilot work to develop healthcare-specific subcategories for the “unsafe acts” in Reason’s book Human Error.7 Because pilot work suggested that these were evident in many reports but not described in sufficient detail to apply any taxonomy, we developed variables addressing this. For example, a narrative might state that a nurse omitted a medication dose without explaining why.
After our study began, The Joint Commission published the Patient Safety Event Taxonomy (PSET). The PSET and our taxonomy share several system and provider factor subcategories as well as similar supporting literature.34–36
Next, we located each paper incident report associated with study participants’ hospitalisations and had physician reviewers use implicit (ie, professional) judgement to assign the most detailed contributing factor subcategories that were evident in the narrative (typically <5 min per report). We did not corroborate reports with other data.
Due to liability concerns, one board-certified internist abstracted all reports. We randomly selected 10% (227 reports) for re-abstraction by this reviewer (intrarater reliability) and another 10% (234 reports) for abstraction by two secondary reviewers (interrater reliability), board-certified physicians (an internist and a pathologist) with patient safety research experience. Reviewers spent about 6 h being trained, abstracting sample reports, and receiving feedback; then they evaluated redacted copies lacking any identifiers.
Analysis involved the following steps. First, we determined the percentages of reports identifying any contributing factors and each major category of factors. Second, for each major category, we determined the percentages of reports that had factors classifiable at the broadest subcategory level or lower, and the mean number of such factors per report. Third, among the reports having classifiable factors, we examined percentages for each of the broadest subcategories.
We conducted analyses using SAS Version 9.1.3. The SURVEYFREQ and SURVEYMEANS procedures adjusted for the number of hospitalisations per participant.37 The MAGREE macro determined kappa statistics for major categories; statistical power was insufficient to estimate kappas for subcategories. We compared reports between the two hospitals using the Rao–Scott chi-square test.
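For the two-rater (interrater) case, the kappa statistic reported below can be sketched in Python; the MAGREE macro generalises this to multiple raters and categories. The ratings here are hypothetical presence/absence judgements for a single contributing-factor category.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(r1)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Agreement expected by chance if the raters were independent.
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical judgements (1 = factor evident, 0 = not evident).
rater_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
kappa = cohens_kappa(rater_a, rater_b)
```

By the conventional Landis–Koch benchmarks, values of 0.41–0.60 indicate moderate agreement, 0.61–0.80 substantial and above 0.80 almost perfect.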
Across both hospitals, 2284 incident reports addressed study participants; we evaluated 2228 paper reports because 56 (2.5%) were missing. Subsequent results are weighted for the number of hospitalisations per participant and represent the entire population discharged from the hospitals in 2001: 32 280 patients with 3875 reports.
Overall, at least one contributing factor was evident in 3095 reports (80%). Patient factors (table 1) and system factors (table 2) were each evident in 32% of reports, and agreement about this ranged from moderate to almost perfect (patient factors intrarater kappa 0.84, interrater 0.72; system factors intrarater 0.62, interrater 0.54). Provider factors (table 3) were evident in 46% of reports, and agreement about this was moderate to substantial (intrarater kappa 0.73, interrater 0.41).38
The taxonomy’s subcategories applied to most reports having patient or system factors. However, subcategories could only be applied to 53% of reports in which provider factors were evident; insufficient narrative detail accounted for all but one of these reports. The average number of classifiable factors per report was: 0.39 (95% CI 0.36 to 0.42) for patient factors, 0.44 (0.40 to 0.47) for system factors and 0.26 (0.24 to 0.29) for provider factors. Tables 1–3 list the broadest subcategories.
Differences between the academic and community hospitals were modest: respectively, 29% and 39% of reports described patient factors (p<0.0001, χ2(1) = 22.1025), 33% and 31% described system factors (p = 0.40, χ2(1) = 0.7085), and 43% and 53% described provider factors (p<0.0001, χ2(1) = 19.5651). Respectively, 20% and 34% of provider factors were classifiable (p<0.0001, χ2(1) = 49.8124).
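As a simplified illustration of the hospital comparison (the study used the Rao–Scott test, which additionally adjusts for the survey design), an unadjusted Pearson chi-square for a 2×2 table can be computed with hypothetical counts chosen to mirror the reported patient-factor percentages:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    using the closed-form expression for 2x2 tables."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: reports describing patient factors,
# academic (~29%) vs community (~39%) hospital.
academic_yes, academic_no = 290, 710
community_yes, community_no = 390, 610
stat = chi_square_2x2(academic_yes, academic_no, community_yes, community_no)
# With 1 degree of freedom, values above 3.84 are significant at p < 0.05.
```

The Rao–Scott correction divides such a statistic by a design-effect factor before comparison with the chi-square distribution, so the unadjusted value here overstates significance for clustered survey data.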
We identified patient, system and provider factors in more than 2000 paper incident reports submitted to established reporting systems in two US hospitals. Although most reports described at least one factor, fewer than half revealed each major type. Further, provider factors (errors, mistakes and violations), when present, were often insufficiently elucidated to apply even the broadest categories in our taxonomy, let alone to formulate recommendations for improving systems of care.
We evaluated hospital incident reports as stand-alone documents to determine how frequently their narratives contain detailed insights into contributing factors. To our knowledge, only one prior study has reported the frequency with which hospital incident reports identify contributing factors. In an academic hospital in the US, Tuttle et al examined a newly implemented electronic reporting system that used drop-down menus. Fifty per cent of electronic reports implicated “human factors” (not defined), and only 5% implicated system factors, far fewer than in our study. Subcategories were not described, so it is unclear whether or not detailed information about contributing factors was obtained.39
The Australian Incident Monitoring Study (AIMS) and a related study in intensive care units (AIMS-ICU) examined system and provider factors in incident reporting systems developed for research purposes. Using reports with checkboxes identifying specific system and provider factors, AIMS researchers collected 2000 incident reports from anaesthesiologists in several hospitals. The percentages of AIMS reports identifying system factors (26%) and provider factors (61%) were comparable with our findings.8,29 Using a similar study design but analysing both narratives and checkboxes, AIMS-ICU researchers identified 3.5 system and provider factors per report.30 The fact that the AIMS-ICU reports identified many more system and provider factors than our reports did suggests that substantial improvements to the quality of hospital incident reports may be feasible.
Barriers to disclosing system and provider factors in reports may resemble barriers to filing reports in the first place.16–19 If blame or liability could ensue, reporters might consciously withhold their observations about event causes from the incident reports they file. Training about what to include in reports is probably also important. Indeed, some nursing publications discourage divulging hospital or provider failures lest the reports be exposed during litigation,40,41 a valid concern.16,42 Consequently, event descriptions might also be strengthened by strategies for improving reporting rates, such as creating protections from blame, training providers how to complete reports and emphasising near-misses over adverse events.
Structured reporting forms, whether paper or electronic, may also be helpful. However, check boxes or pull-down menus should not replace detailed narratives, for two reasons. First, in aviation reporting systems, detailed narratives have clearly provided the principal source of actionable insights.6 Second, providers may be able to describe contributing factors without recognising them as such; this may explain why AIMS-ICU reports identified many more system and provider factors than AIMS reports did.8,29,30
This study included only two hospitals, and data are from 2001; however, the reporting systems and practices appear typical for the US today (C Pham, personal communication, The RAND Corporation, Santa Monica, 2006).15 We did not assess reporting culture, which has improved in some hospitals since 2001.23,24 Future research should assess the relationship between culture and the disclosure of contributing factors.
Our taxonomy development methods did not include searches of psychology or ergonomics literature, or a formal process for synthesising the literature. However, pilot testing did not reveal missing categories, almost all factors that narratives described in detail were classifiable using our taxonomy, and it resembles a widely accepted taxonomy.34
Results rest on judgements by a single physician, and agreement with secondary reviewers was sometimes only moderate, which suggests that the disclosure of contributing factors could be somewhat better or worse than we report.38 Several factors likely affected reliability. As is typical in patient safety, reviewers made implicit judgements—these are necessary because reporters do not explicitly label contributing factors as such. Secondary reviewers examined redacted reports, which made understanding narratives more challenging, and they had less rating experience than the principal reviewer. These factors may explain why interrater reliability is lower than intrarater reliability. Reliably classifying events has also been challenging in landmark patient-safety studies, which have used only one reviewer per case,43 and found similar or worse agreement among reviewers.44–46
Hospital incident reports contain much less information about contributing factors than they could, which suggests that strategies for eliciting richer narratives are needed if incident reports are to fulfil their potential to improve patient safety.
M F Shapiro, D Hedges, L Underdahl, B Browning, S Foss and K Fragola made this study possible. L Judge, C Yale and X Li provided assistance with data. D Adamson edited the final draft.
Funding: This study was supported by a National Research Service Award fellowship, a Developmental Center grant (HS11512) from the Agency for Healthcare Research and Quality, and the RAND Corporation.
Competing interests: None.
Ethics approval: Institutional Review Board approval was obtained.