Abstract
Background: US hospitals have had voluntary incident reporting systems for many years, but the effectiveness of these systems is unknown. To facilitate substantial improvements in patient safety, the systems should capture incidents reflecting the spectrum of adverse events that are known to occur in hospitals.
Objective: To characterise the incidents from established voluntary hospital reporting systems.
Design: Observational study examining about 1000 reports of hospitalised patients at each of two hospitals.
Patients and setting: 16 575 randomly selected patients from an academic and a community hospital in the US in 2001.
Main outcome measures: Rates of incidents reported per hospitalised patient and characteristics of reported incidents.
Results: 9% of patients had at least one reported incident; 17 incidents were reported per 1000 patient-days in hospital. Nurses filed 89% of reports, physicians 1.9% and other providers 8.9%. The most common types were medication incidents (29%), falls (14%), operative incidents (15%) and miscellaneous incidents (16%); 59% seemed preventable and preventability was not clear for 32%. Among the potentially preventable incidents, 43% involved nurses, 16% physicians and 19% other types of providers. Qualitative examination of reports indicated that very few involved prescribing errors or high-risk procedures.
Conclusions: Hospital reporting systems receive many reports, but capture a spectrum of incidents that differs from the adverse events known to occur in hospitals, thereby substantially underdetecting physician incidents, particularly those involving operations, high-risk procedures and prescribing errors. Increasing the reporting of physician incidents will be essential to enhance the effectiveness of hospital reporting systems; therefore, barriers to reporting such incidents must be minimised.
Voluntary incident reporting in hospitals is a centrepiece of national patient safety policies in the US, the UK and Australia, because this practice has improved safety in other high-risk industries.1–5 However, although voluntary incident reporting systems have long existed in US hospitals,4,6,7 their effectiveness remains unclear.4,7,8
In US hospitals, reporting systems were developed to prevent and prepare for litigation, whereas in other industries reporting systems were designed to improve safety.6,9–11 In around 1965, risk managers adapted the critical incident technique to reduce events that could lead to malpractice claims against hospitals, particularly medication errors, falls, patient misidentification and retained foreign bodies after surgery.6,12 They also started using the reports as alerts to possible claims.10,11 Hospital reporting systems capture few of the errors and adverse events that are identifiable by other means13–15; nevertheless, some collect several thousand reports per year.14
Although published literature emphasises increasing reporting, this seems necessary but not sufficient to make these systems effective. To prevent serious harm to patients, the systems should capture incidents reflecting actual or potential risks of such harm.16 Large-scale studies using medical-record review have examined disabling and fatal adverse events that occurred during hospital care in the US, the UK and Australia.17–20 Assuming that the studies are accurate, reported incidents should be similar to the adverse events that occurred in the same country. In US hospitals, incidents should resemble adverse events from the Harvard and Utah/Colorado Medical Practice Studies, nearly all of which occurred during hospitalisation.19,20
In this study, our objectives were to determine how frequently incidents are reported in two US hospitals and to characterise the incidents in terms of type, preventability, location, harm, the types of providers reporting and the types of providers involved.
These two reporting systems are typical for US hospitals in terms of their purpose, functioning and definition of reportable incidents (Pham C, RAND Corporation, personal communication, March 21, 2006).6,8 Reporting practices and safety culture have changed only modestly in US hospitals in recent years,21 and most hospitals still use paper reports. Our study hospitals introduced electronic reporting in 2002; because this was not representative of how most systems operate, we studied conditions in 2001.
METHODS
We examined a representative sample of about 1000 incident reports of inpatients at each hospital.
Setting
The study hospitals were an academic tertiary referral centre (668 beds) and an affiliated community hospital (363 beds) in an urban area in Southern California.
In 2001, neither hospital had recently changed its reporting practices or formally evaluated its patient safety culture. Risk/quality managers received 230 reports per month at the academic hospital and 100 at the community hospital. Reporting was voluntary but not anonymous. Reportable incidents included “Any occurrence that is not consistent with the routine operation of the medical center and that potentially may, or actually did, result in injury, harm or loss to any patient, or visitor of the medical center.”
Incident reports were one-page paper forms listing: patient identifiers, age, diagnosis, admission date; event date, time and location; and reporter name and profession. Structured questions addressed event type, harm/injury rating, falls and medication errors. Incident descriptions were entered in a 2.5-inch × 4.25-inch area, plus additional pages when desired. Unit nurse managers reviewed reports, occasionally added comments and forwarded them to risk/quality managers, who summarised them in incident-report databases. For events involving legal issues, providers sometimes called risk/quality managers without filing reports. We excluded pharmacy systems documenting medication order changes, because they functioned separately and lacked detailed descriptions.
Subjects
To obtain a representative sample of reports, we (1) obtained discharge and incident-report databases for 2001, (2) linked them to determine which patients had reports and (3) sampled enough patients to obtain about 1000 reports at each hospital. This required all 9850 patients at the community hospital. At the academic hospital, we used a SAS random-number generator to select 6725 (30%) of 22 430 patients. Because discharge databases listed hospitalisations, we oversampled patients with multiple discharges and therefore weighted subsequent analyses to adjust for this.
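The linkage and sampling logic can be sketched as follows. This is an illustrative reconstruction in Python/pandas rather than the study's actual SAS code; the toy data and column names (discharges, incident_reports, patient_id, hospital) are hypothetical, not the study's variables.

```python
# Illustrative sketch of the database linkage and sampling step (not the
# study's SAS code); toy data and column names are hypothetical.
import pandas as pd

# Toy stand-ins for the 2001 databases: the discharge database has one row
# per hospitalisation, the incident-report database one row per report.
discharges = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 4, 5, 6],
    "hospital":   ["academic"] * 4 + ["community"] * 3,
})
incident_reports = pd.DataFrame({"patient_id": [1, 4]})

# Linkage: flag which patients have at least one incident report.
discharges["has_report"] = discharges["patient_id"].isin(incident_reports["patient_id"])

# Sampling: all patients at the community hospital; a 30% random selection
# of patients at the academic hospital (the study used a SAS random-number
# generator for this step).
academic_ids = discharges.loc[discharges["hospital"] == "academic", "patient_id"].drop_duplicates()
community_ids = discharges.loc[discharges["hospital"] == "community", "patient_id"].drop_duplicates()
sampled_ids = pd.concat([academic_ids.sample(frac=0.30, random_state=2001), community_ids])

sample = discharges[discharges["patient_id"].isin(sampled_ids)]

# Because the discharge database lists hospitalisations, patients with
# multiple discharges are over-represented among reports; carry the number
# of hospitalisations per subject forward so that later analyses can be
# weighted to adjust for it.
n_discharges = sample.groupby("patient_id").size().rename("n_discharges").reset_index()
print(sample.merge(n_discharges, on="patient_id"))
```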
The institutional review board for both hospitals approved the study and did not require informed consent.
Data collection
Reports were ineligible if they addressed only visitors or staff, or care not associated with hospitalisation. We abstracted study variables from information reporters provided in original reports; we did not corroborate reports with medical records. When multiple reports described an incident, we assessed them individually and in summary.
For reasons of confidentiality, one board-certified internist (TKN) authorised by the risk/quality management departments abstracted all reports. We randomly selected 10% for re-abstraction by this reviewer (intra-rater reliability) and another 10% for abstraction by two secondary reviewers (inter-rater reliability), a board-certified internist (DSB) and a board-certified pathologist (LHH). Secondary reviewers evaluated redacted copies lacking dates, times, and patient and provider identifiers. The principal reviewer developed data collection methods with the secondary reviewers, who had substantial patient-safety research experience.
Study variables
We transcribed the reporting provider's type, the location and the harm/injury rating from reports. Physician reviewers judged risks of medical versus social harm, event types, preventability and, for potentially preventable incidents, the provider types that seemed to be involved. We used Harvard Medical Practice Study categories for event type and location.22 We used a three-point preventability scale because raters could not reliably distinguish the two intermediate categories of a common four-point scale.23 Provider types included nurses, physicians, other providers and unknown providers.
Analytical methods
First, we computed reported incidents per 1000 patient-days, percentages of subjects and hospitalisations associated with a report, and types of providers reporting.
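For clarity, the weighted rate calculations can be written as follows; the notation is ours, not the paper's. With sampling weight $w_i$, reported incidents $n_i$ and hospital days $d_i$ for patient $i$,

\[
\text{incidents per 1000 patient-days} = 1000 \times \frac{\sum_i w_i\, n_i}{\sum_i w_i\, d_i},
\qquad
\%\text{ of patients with a report} = 100 \times \frac{\sum_i w_i\, \mathbf{1}[n_i \ge 1]}{\sum_i w_i}.
\]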
Next, we examined harm/injury and risk-of-harm ratings, analysed event types across preventability categories, and qualitatively examined incidents in common categories. We examined locations and, for potentially preventable incidents, types of providers involved.
We conducted analyses using SAS V.9.1.3 (SAS Institute, Cary, North Carolina, USA). The MAGREE macro determined κ statistics, and the SURVEYFREQ procedure adjusted for the number of hospitalisations per subject. We compared events across the two hospitals using the χ2 test and reported major differences.
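The κ statistic is chance-corrected agreement. As a rough illustration only (the study itself used the SAS MAGREE macro), the sketch below computes Cohen's κ for two raters in Python; the preventability ratings shown are made-up examples, not study data.

```python
# Rough Python illustration of chance-corrected agreement (Cohen's kappa,
# two-rater case); not the study's SAS MAGREE computation.
import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning nominal categories."""
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    categories = np.union1d(a, b)

    # Observed agreement: proportion of items rated identically.
    p_observed = np.mean(a == b)

    # Expected agreement if the two raters were independent.
    p_expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)

    return (p_observed - p_expected) / (1 - p_expected)

# Made-up ratings on the three-point preventability scale.
primary   = ["preventable", "preventable", "unclear", "not preventable", "preventable"]
secondary = ["preventable", "unclear",     "unclear", "not preventable", "preventable"]
print(round(cohens_kappa(primary, secondary), 2))
```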
RESULTS
There were 2284 eligible reports addressing study subjects (fig 1). We evaluated 2228 paper reports, as 56 that seemed eligible according to the incident-report database were missing. Thirty-nine incidents had multiple reports (40 additional reports in total). There were 2244 unique incidents (2188 paper reports plus 56 missing reports).
To adjust for our sampling strategy, the subsequent results are weighted to represent all patients discharged from study hospitals in 2001. These 32 280 patients had an estimated 3911 incidents described in 3981 reports (3875, excluding missing reports). There were 17 incidents per 1000 patient-days. In all, 9% of patients and 8% of hospitalisations had at least one report. Nurses filed 3407 (88%), physicians 73 (1.9%), other providers 346 (8.9%) and unknown providers 49 (1.3%).
Table 1 lists harm/injury and risk-of-harm ratings. Harm/injury ratings were qualitatively variable and often missing. Most reports described medical rather than social risks (intra-rater, κ 0.74; inter-rater, 0.48).
Table 2 lists event type by preventability and Table 3 provides examples. Medication events, operative events, miscellaneous events, falls and procedural events (intra-rater, κ 0.92; inter-rater, 0.86) were most common (fig 2). Overall, 59% of incidents were preventable, 9% were not and 32% were of indeterminate preventability (intra-rater, κ 0.86; inter-rater, 0.68).
Half the incidents (1859) occurred in floor units, 21% (797) in intensive care units and 14% (544) in operating rooms; <5% involved other locations.
Providers involved in potentially preventable incidents included nurses, 43% (1507); physicians, 16% (556); and other providers, 19% (657) (intra-rater, κ 0.72–0.89; inter-rater, 0.36–0.61). Multiple types were involved in 239 (7%) and no type was identified for 993 (28%).
Rates of reported incidents were similar between the hospitals; differences in other variables were modest. At the community hospital, physicians reported less often (1% vs 2%; p<0.001) but were involved more often (18% vs 15%; p<0.001). Incidents involved operations (10% vs 16%) and drugs (22% vs 32%) less often, and miscellaneous incidents were more common (24% vs 12%; p<0.001).
DISCUSSION
Summary
Our study documents the rates and types of incidents captured by two reporting systems that are typical among US hospitals. Of the hospitalisations, 8% involved incidents that providers, mainly nurses, found troubling enough to report. Consistent with historical uses,6 reports emphasised drug administration errors, falls, incorrect needle counts, identification issues and cardiac arrests. Two-thirds occurred in patient rooms. Almost 60% were preventable and, for most of these, non-physician providers seemed to be involved.
Context
To date, under-reporting has been recognised as the salient limitation of the hospital reporting systems.8,13–15 We found, however, that reported incidents were more than twice as prevalent as adverse events in the Harvard and Utah/Colorado studies.20,24 Moreover, a recent study of electronic reporting documented twice the rates we observed.25 Perhaps a key issue is not whether hospital reporting systems can collect many incidents, but whether the incidents collected reflect the greatest threats to patient safety.
Hospital reporting systems seem to capture a different spectrum of events than the Harvard and Utah/Colorado studies established as priorities. Reviewing hospital medical records, those studies found that injuries involving operations, procedures and drugs were prevalent, but injuries due to falls were rare. Of the adverse events, 40% occurred in operating rooms, 25% in floor units, and 3% in intensive care.19,20 In comparison, the reporting systems that we studied identified more falls, drug errors and miscellaneous events occurring in patient rooms, but far fewer incidents involving surgery. Electronic systems capture similar incidents.25
Our qualitative observations also suggest that hospital reporting systems collect many reports of some subtypes while missing other important subtypes. For example, many procedural incidents were infiltrated peripheral intravenous lines; only a handful involved high-risk procedures such as endoscopy, bronchoscopy or central line placement. Many drug incidents were omitted doses; few involved incorrect drugs or doses, which were common in the Utah/Colorado study.20
The reports probably emphasise issues such as peripheral line care and drug administration, because nurses file most reports and focus on issues they know well. However, in the Utah/Colorado study, physicians were responsible for 94% of all events (including non-preventable ones), nurses for 2% and other providers for 3%.26 The Harvard study also focused on physician events.22 Thus, the fact that only 16% of potentially preventable incidents in our study appeared to involve physician care represents a major limitation of hospital reporting systems.
Reports probably underemphasise physician care for two reasons. First, reporting systems were developed to minimise litigation against hospitals and their employees.6 Hospitals have instead addressed physician care via peer review, credentialing, and morbidity and mortality conferences.27,28 Second, physicians are probably better than other providers at identifying physician errors, yet physician reporting was minimal both in our study and in a study of electronic reporting.25
Increasing the reporting of physician incidents will not assure that hospital reporting systems are effective, but without it they cannot be. Hospitals, physicians and, when necessary, policymakers should collaboratively resolve the structural and cultural barriers to reporting physician incidents. First, shifting the purpose of reporting from preventing litigation against hospitals to improving safety would make capturing physician incidents a higher priority. Second, barriers to reporting by physicians should be mitigated. Reports can be disclosed in litigation in some states,29–31 and hospitals sometimes limit reporting to employees. Cultural barriers perceived by physicians include: time and effort, lack of confidentiality, lack of feedback, poor understanding of what to report or how, fear of blame and reprisal, and doubts about the value of reporting.32–34
Limitations
Our study included two hospitals within one geographical area. Nevertheless, many patient safety studies address one hospital, and we included both academic and community settings.
Liability concerns limited report review methods; the results reflect the judgments of one physician with full access to the reports. Agreement with two secondary reviewers was generally moderate to substantial, although occasionally lower.35 However, the Utah/Colorado study also used one reviewer per case,20 and prior studies have documented similar or worse agreement among three reviewers.36–38
CONCLUSION
Our findings identify important limitations to incident reporting systems in US hospitals. The spectrum of incidents captured differs from the adverse events that are known to occur, and physician incidents are markedly under-represented. Increasing the reporting of physician incidents will be essential to enhance the effectiveness of hospital-reporting systems; therefore, barriers to reporting such incidents must be minimised.
Acknowledgments
We thank Martin F. Shapiro, Diana Hedges, Louise Underdahl, Barbara Browning, Sarah Foss and Kathleen Fragola who made this study possible. We also thank Lisa Judge who worked on report abstraction; Coralee Yale and Xuesheng Li who contributed their time to provide discharge data; and Mary Vaiana who edited the final draft.
REFERENCES
Footnotes
- Funding: This study was supported by a National Research Service Award fellowship, a Developmental Center grant (HS11512) from the Agency for Healthcare Research and Quality and the RAND Corporation.
- Competing interests: None.