Adverse-event-reporting practices by US hospitals: results of a national survey
D O Farley (1), A Haviland (1), S Champagne (2), A K Jain (3), J B Battles (4), W B Munier (4), J M Loeb (2)

  1. RAND Corporation, Pittsburgh, Pennsylvania, USA
  2. The Joint Commission, Oakbrook Terrace, Illinois, USA
  3. RAND Corporation, Arlington, Virginia, USA
  4. Agency for Healthcare Research & Quality (AHRQ), Rockville, Maryland, USA

Correspondence to: Dr D O Farley, RAND Corporation, 4570 Fifth Avenue, Suite 600, Pittsburgh, PA 15213, USA; donna_farley{at}rand.org

Abstract

Context: Little is known about hospitals’ adverse-event-reporting systems, or how they use reported data to improve practices. This information is needed to assess effects of national patient-safety initiatives, including implementation of the Patient Safety and Quality Improvement Act of 2005 (PSQIA). This survey generated baseline information on the characteristics of hospital adverse-event-reporting systems and processes, for use in assessing progress in improvements to reporting.

Methods: The Adverse Event Reporting Survey, developed by Westat, was administered in September 2005 through January 2006, using a mixed-mode (mail/telephone) survey with a stratified random sample of 2050 non-federal US hospitals. Risk managers were the respondents. An 81% response rate was obtained, for a sample of 1652 completed surveys.

Results: Virtually all hospitals reported they have centralised adverse-event-reporting systems, although characteristics varied. Scores on four performance indexes suggest that only 32% of hospitals have established environments that support reporting, only 13% have broad staff involvement in reporting adverse events, and 20–21% fully distribute and consider summary reports on identified events. Because survey responses are self-reported by risk managers, these may be optimistic assessments of hospital performance.

Conclusions: Survey findings document the current status of hospital adverse-event-reporting systems and point to needed improvements in reporting processes. PSQIA liability protections for hospitals reporting data to patient-safety organisations should also help stimulate improvements in hospitals’ internal reporting processes. Other mechanisms that encourage hospitals to strengthen their reporting systems, for example, strong patient-safety programmes, also would be useful.

In its report, To Err Is Human: Building a Safer Health System, the Institute of Medicine highlighted the importance of adverse-event reporting as a foundation for patient-safety improvement and identified the fragmented nature of reporting as a significant barrier to achieving improvements.1 Despite growing activity to improve patient-safety reporting and practices, little is documented systematically about the extent to which individual healthcare organisations have systems for reporting errors and adverse events, or how they use the reported data to implement safer practices.2 Errors are defined as actions or inactions that lead to deviations from intentions or expectations. Adverse events are occurrences during clinical care that result in physical or psychological harm to a patient or harm to the mission of the organisation.

This paper reports results of a national survey of hospitals that characterises the extent to which US hospitals have adverse-event reporting systems and how they use them. The survey was administered collaboratively by the RAND Corporation and the Joint Commission.

Because standardised data on reported adverse events have been lacking, it has not been possible to detect and assess safety issues at the national level or to track trends over time.13 With enactment of the Patient Safety and Quality Improvement Act of 2005 (PSQIA), the US Congress established a structure and process intended to reduce the fragmentation of information on reported patient-safety events and issues.4 The PSQIA provides for national certification of patient-safety organisations (PSOs), to which healthcare providers can report data and other patient-safety information, and it establishes confidentiality and protection from legal discovery for information reported by participating providers.

No formal models for hospital adverse-event-reporting systems have been published, but many sources identify the essential components of an effective system. A hospital’s reporting system should be one element of a cohesive patient-safety programme that includes identification of errors and occurrences through reporting, and establishment of patient-safety infrastructure, processes and climate that support reduction in adverse events.59 A reporting system should be able to capture both adverse events and near misses, define adverse events precisely to prevent under-reporting or misperceptions, and link errors to patient and team characteristics.61011 The system also should be linked to organisational leaders who can act on reports.612 A broad range of staff throughout the hospital should participate in reporting, with confidentiality or anonymity provided for those who report occurrences—preferably confidentiality to allow discussion of occurrences with the reporting persons.681314

These principles also apply to external adverse-event-reporting systems. The World Health Organization established guidelines that identify the characteristics of successful adverse-event-reporting systems.15 Such systems should be non-punitive, confidential, independent, analytically capable, systems-oriented and responsive in developing solutions. Several countries have national reporting systems with many of these features: the NHS in England and Wales, The Netherlands, Slovenia and Australia have voluntary systems, and the Czech Republic, Denmark, Ireland and Sweden have mandatory systems.91517

Funded by the Agency for Healthcare Research and Quality (AHRQ), the RAND Corporation and the Joint Commission administered the Hospital Adverse Event Reporting Survey (AERS) for a national sample of hospitals, to establish an information base on the characteristics and use of reporting systems operated by US hospitals. The goal was to establish estimates of the percentage of hospitals that have such systems, the status of reporting practices, and how information on reported occurrences is disseminated and used for practice improvement. The survey results would establish baseline information for two policy-related purposes—to enable tracking of trends in improvements for adverse-event-reporting practices across the country and to assess effects of implementation of the PSQIA on hospitals’ internal reporting processes.

To reduce adverse events for hospital patients, hospitals need both effective reporting systems that identify risks and hazards in their care processes and effective performance-improvement processes that act on the reported information. This paper reports survey results on the first of these steps, estimating the extent to which hospitals currently collect and disseminate the occurrence data needed to inform effective performance improvement. Drawing from the published information described above about the features of effective reporting systems, we identified four system components that should be in place for effective operation of hospital adverse-event reporting, which were used to frame our analysis:

  • a supportive environment that protects the privacy of staff who report occurrences;

  • broad reporting to the system by a range of types of staff;

  • timely distribution of summary reports that document reported occurrences for use in action strategies to prevent future adverse events from occurring; and

  • senior-level review and discussion of summary reports by key hospital departments and committees for policy decisions and development of action strategies.

DESIGN AND METHODS

Westat developed and pilot-tested the AERS questionnaire for the US DHHS Quality Interagency Coordination Task Force, including an assessment of whether data needed to be collected from one or more types of personnel to obtain valid and reliable results.18 Questions covered in the survey included whether hospitals collect information on adverse events, what information is collected, who reports occurrences, how reporters’ privacy is protected and how the collected data are used.

In testing the survey, Westat conducted cognitive interviews with risk managers and department heads, which guided terminology, response options and several aspects of survey design. A draft instrument was reviewed by American Hospital Association staff, resulting in substantive revisions. Test results suggested that respondents understood the questions being asked and that the questions obtained the desired information.

Based on field test data collected from hospital risk managers and up to six department heads (eg, nursing, medicine, laboratory), Westat found that most of the adverse-event reports are sent to the risk manager, although many are not.18 Westat concluded that a survey of the risk managers could

provide a relatively complete picture of adverse event-reporting systems in hospitals, … focusing on the main reporting vehicle for the hospital, describing reporting for the majority of adverse events, … [and] would also give a picture of the types of events that are not reported to their systems.

Where more detailed information on reporting patterns and practices might be needed, these results can be supplemented with departmental manager surveys.

Our goal was to understand the status of hospitals’ main vehicles for reporting adverse events. Therefore, based on Westat’s pilot test results, the AERS survey questionnaire for risk managers was used with minor modifications to improve clarity and data completeness. Changes to a small number of questions on the Westat survey included edits to clarify terminology or wording, additional response options to obtain more complete data, reordering of response options to improve logic flow and open-ended response options for two items. In addition, one question that collected duplicative information was deleted, and two new questions were added about whether the hospital had a patient-safety programme.

We administered the AERS to risk managers at a nationally representative sample of 2050 non-federal hospitals in the US from September 2005 to January 2006. The hospital risk manager to be surveyed was identified by an initial telephone contact with each hospital in the sample. The survey was mailed to participants, followed by telephone follow-up interviews for those who did not complete the mail survey.

The sampling frame consisted of 5517 non-federal hospitals in the 2003 database of the American Hospital Association, excluding those in southern portions of Louisiana and Mississippi. Hurricane Katrina occurred at the time we went into the field for survey data collection, affecting hospitals in those areas. We dropped 67 hospitals in southern Louisiana and Mississippi from our original sample and replaced them with additional randomly sampled hospitals in the same strata. (Hospitals dropped were those in zip codes beginning with 700–708 and 390–397.) The sample is thus representative of non-federal hospitals nationally excluding these regions. The sample was stratified by Joint Commission accreditation status, hospital ownership and staffed bed size, which also yielded good representation on teaching, urban/rural and multihospital system status. (The Joint Commission performs voluntary accreditations for hospitals and other healthcare organisations across the US, and Joint Commission accreditation has become a standard for participation in many health-insurance programmes.)
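For readers interested in how such a draw can be implemented, the sketch below illustrates a stratified random sample that skips the excluded ZIP-code areas. It is only a minimal illustration: the column names, stratum definitions and per-stratum sample sizes are hypothetical placeholders, not the study’s actual sampling files or allocation.

    import pandas as pd

    # Three-digit ZIP prefixes excluded after Hurricane Katrina (700-708, 390-397)
    EXCLUDED_ZIP3 = {f"{z:03d}" for z in list(range(700, 709)) + list(range(390, 398))}

    def draw_stratified_sample(frame: pd.DataFrame, n_per_stratum: dict, seed: int = 0) -> pd.DataFrame:
        """Simple random sample within each stratum, skipping excluded ZIP areas."""
        eligible = frame[~frame["zip"].str[:3].isin(EXCLUDED_ZIP3)]
        parts = []
        for key, group in eligible.groupby(["accredited", "ownership", "bed_size_class"]):
            n = min(n_per_stratum.get(key, 0), len(group))  # target allocation for this stratum
            parts.append(group.sample(n=n, random_state=seed))
        return pd.concat(parts, ignore_index=True)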

We established indexes as summary measures of hospitals’ performance on the four components identified for an effective adverse-event-reporting system. Each index was based on data from relevant survey questions (table 1). For the components on supportive environment and on reporting by a range of staff, we established indexes based on two survey questions each. For the supportive environment component, a hospital was given one point if it provides for anonymous reporting for all reporters and one point if it always keeps identity private for reporters who identify themselves (on three-point scales of all, some, none). For the index on range of staff reporting, a hospital was given one point if it reported that at least some of its reports came from physicians, and one point if it reported that at least some reports were submitted by technicians, therapists, pharmacy staff or other staff (on five-point scales of all to none). Reporting by nurses was not included because survey results showed that nurses were the predominant reporters for a large share of the hospitals.

Table 1 Composition of hospital reporting performance indexes

The other two indexes address the distribution and discussion of summary reports on reported occurrences within the hospital. The index for timely distribution of reports is based on responses to three survey questions. A hospital was given one point if it distributes summary reports within the hospital (yes/no response), one point if it produces summary reports on a monthly basis or more frequently (from a four-point scale of weekly, monthly, quarterly and annually), and one point if reports are distributed within 2 weeks after the end of the reporting period (from a five-point scale of less than 1 week to 2 months or more).

The index for senior-level review and discussion of reports by key hospital departments and committees is based on responses to two survey questions. A hospital was given one point if it always provides reports to all of three key departments: hospital administration, nursing department and medical administration (five-point scale of always to never, conditional on having the department). It also was given one point if it reported that adverse events are discussed at both the hospital board or board committee and the medical executive committee (yes/no response, conditional on having the committee).
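The scoring logic for the four indexes can be summarised compactly in code. The sketch below is a minimal illustration of the rules described above; the response codes and field names are hypothetical stand-ins for the actual survey items, and the conditioning on whether a hospital has a given department or committee is not modelled.

    # Five-point scale responses counted as "at least some" reporting
    AT_LEAST_SOME = {"all", "most", "some"}

    def score_supportive_environment(r: dict) -> int:
        score = 0
        score += (r["anonymous_reporting"] == "all")      # anonymous reporting allowed for all reporters
        score += (r["keeps_identity_private"] == "all")   # identity always kept private when known
        return score                                      # 0-2

    def score_staff_range(r: dict) -> int:
        other = ("technician_reports", "therapist_reports", "pharmacy_reports", "other_staff_reports")
        score = 0
        score += (r["physician_reports"] in AT_LEAST_SOME)
        score += any(r[k] in AT_LEAST_SOME for k in other)
        return score                                      # 0-2

    def score_timely_distribution(r: dict) -> int:
        score = 0
        score += (r["distributes_reports"] == "yes")
        score += (r["report_frequency"] in {"weekly", "monthly"})      # monthly or more often
        score += (r["distribution_lag"] in {"<1 week", "1-2 weeks"})   # within 2 weeks of period end
        return score                                      # 0-3

    def score_senior_review(r: dict) -> int:
        departments = ("administration", "nursing", "medical_administration")
        score = 0
        score += all(r[f"reports_to_{d}"] == "always" for d in departments)
        score += (r["discussed_at_board"] == "yes" and r["discussed_at_med_exec"] == "yes")
        return score                                      # 0-2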

Non-response weights were created to realign the sample characteristics with the target population, and these weights were used in all the analyses performed. We first calculated descriptive statistics of the sample characteristics and estimated distributions of hospitals on the performance indexes. Then, we performed descriptive analyses for individual components of the indexes, and we estimated standard logistic regression models to assess how hospital characteristics were associated with specific aspects of reporting performance.
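One common way to build such non-response weights is a within-stratum adjustment: each respondent is weighted by the ratio of the stratum’s population count to its respondent count. The sketch below illustrates this approach under assumed column names; it is not the study’s actual weighting code.

    import pandas as pd

    STRATA = ["accredited", "ownership", "bed_size_class"]  # assumed stratifiers

    def nonresponse_weights(population: pd.DataFrame, respondents: pd.DataFrame) -> pd.Series:
        """Weight each respondent by (population count / respondent count) in its stratum."""
        pop_counts = population.groupby(STRATA).size()
        resp_counts = respondents.groupby(STRATA).size()
        factors = (pop_counts / resp_counts).rename("weight")
        return respondents.join(factors, on=STRATA)["weight"]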

Hospital characteristics included in these analyses were accreditation status, bed size, ownership, teaching status, rural location, existence of a patient-safety programme and status as a critical access hospital (CAH). Because CAHs differ from other hospitals in their smaller size and more limited services, they may also differ in their adverse-event-reporting systems and practices. To qualify for designation as a CAH, a hospital has to (1) be in a state with a State Flex Program, (2) be in a rural area or be treated as rural under a special CAH provision, (3) provide 24-hour emergency care services using either on-site or on-call staff, (4) maintain no more than 25 inpatient beds, (5) have an average length of stay of 96 h or less, and (6) be more than 35 miles from another hospital or CAH, or more than 15 miles in areas with mountainous terrain or only secondary roads (other exceptions are provided). Risk managers were asked in the survey whether the hospital had in place a comprehensive patient-safety programme. We did not attempt to obtain additional detail on the characteristics of these programmes, because they are complex to profile effectively,19 and the additional survey items required to do so would have increased respondent burden.
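As a simple illustration of how these criteria combine, the sketch below encodes them as a single boolean check. It is deliberately simplified: the field names are hypothetical, and the special exceptions and state-level review involved in actual CAH designation are not modelled.

    def meets_cah_criteria(h: dict) -> bool:
        """Simplified check of the six CAH criteria listed in the text."""
        distance_ok = (h["miles_to_nearest_hospital"] > 35
                       or (h["mountainous_or_secondary_roads_only"]
                           and h["miles_to_nearest_hospital"] > 15))
        return (h["state_has_flex_program"]
                and (h["rural"] or h["treated_as_rural"])
                and h["provides_24h_emergency_care"]
                and h["inpatient_beds"] <= 25
                and h["avg_length_of_stay_hours"] <= 96
                and distance_ok)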

RESULTS

Characteristics of hospitals surveyed

Of the 2050 hospitals in the sample, 1652 completed the survey, for an overall survey response rate of 81%. The characteristics of the 1652 hospitals that completed surveys reflect those of the larger hospital population, as shown by the small differences between the unweighted and weighted distributions of hospitals in the sample (table 2). Therefore, although these weights are used in the analyses presented here, they have a minor effect on the results.

Table 2 Characteristics of the hospitals surveyed

The survey sample included the full range of hospital types. Of the total sample, 63% were general medical-surgical hospitals, and 19% were CAHs (320 hospitals). Only 72% of the hospitals in the sample were Joint Commission-accredited. Of those that were not accredited (466 hospitals), more than half (57%) were CAHs. The remaining 43% of the hospitals without accreditation tended to be rural (50%), small in size (70% have fewer than 75 beds) or specialty hospitals (32%). According to Joint Commission staff, many small, rural hospitals, including CAHs, choose not to seek accreditation because the accreditation survey process is too costly, they do not need the competitive edge of accreditation because there is little competition in rural areas, and they feel the scope of the Joint Commission standards exceeds the range of services that they provide. For the full survey sample, 87% reported they have a patient-safety programme.

Types of reporting systems

All but a small percentage of the risk managers reported that their hospitals had a centralised adverse-event-reporting system (table 3), with the types of systems differing between the non-CAHs and CAHs. An estimated 75% of the non-CAHs reported they used both paper and computer systems, and another 14.2% used computer-only systems, whereas 39.5% of the CAHs reported using paper-only systems.

Table 3 Percentage of hospitals that have reporting systems

We found strong consistency among hospitals regarding many of the collected data elements. Virtually all the hospitals’ systems had the capability to record the type, place and time of occurrences, and all but a small percentage could document patient demographics, needed follow-up treatment, action taken and personnel involved (table 4). However, only 82% of the hospitals reported that their systems could collect data on the patient’s condition before and after an occurrence, and only 79% collected data on severity of patient harm.

Table 4 Types of data that hospital adverse-event-reporting systems are designed to collect

Features of a well-performing hospital adverse-event-reporting process

The four performance indexes summarise the current status of hospitals’ reporting systems. Only small percentages of hospitals had the maximum score for each of the four indexes (a supportive environment, types of staff reporting, timely reporting and reporting to departments or committees) (figs 1, 2). For the supportive environment and timely reporting indexes, hospitals were somewhat evenly distributed across the scores.

Figure 1 Distribution of hospitals for supportive environment for reporting and types of staff reporting. *n = 1578 with 5 per cent missing. **n = 1518 with 8 per cent missing.
Figure 2 Distribution of hospitals for timely reporting and reporting to departments and committees. *n = 1522 with 8 per cent missing. **n = 1267 with 23 per cent missing.

For the index on type of staff reporting, 69% of hospitals had index scores of one point, suggesting that occurrences in their hospitals were likely to be reported by either physicians or other staff, but not both. A similar pattern was found for reporting to high-level departments and committees, indicating that occurrence reports were being considered by either internal departments or committees, but not both. Data were missing for 23% of hospitals on the use of summary reports with hospital departments and committees, which may indicate that actual performance is less positive than the index scores suggest.

Supportive hospital environment for reporting

Risk managers were asked whether hospital policy provided for anonymous reporting or for keeping a reporter’s identity private when an occurrence was reported non-anonymously. An estimated 47 (SD 2.4)% of the hospitals always allow for anonymous reporting, and 29 (2.2)% never allow for it. An estimated 8 (0.8)% of hospitals overall never keep reporters’ identities private once identities are known. CAHs are more likely than other hospitals to keep reporters’ identities private (fig 3) (χ2, p<0.001).

Figure 3 Distribution of hospitals by policies for anonymous reporting and keeping reporting person’s identity private. CAH, critical access hospital.
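The CAH versus non-CAH comparison above is a standard chi-squared test of independence on a contingency table of responses. The sketch below shows the mechanics with placeholder counts; it does not use the study data and, unlike the published analysis, does not apply the survey weights.

    from scipy.stats import chi2_contingency

    # Rows: CAH, non-CAH; columns: identity kept private for all / some / none of reporters.
    # Counts are placeholders for illustration only.
    table = [[210, 80, 10],
             [700, 450, 100]]
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.4f}")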

Logistic regression models assessed which hospital characteristics were associated with each of the two supportive environment components (table 5). A hospital was more likely to both allow anonymous reporting and keep reporters’ identities private if it had a computer-only reporting system or had a patient-safety programme.

Table 5 Factors associated with hospital privacy policies for those who report adverse events

Small hospitals, for-profit hospitals and government-owned hospitals were less likely to always allow anonymous reporting, but these characteristics did not affect reporters’ privacy protection. CAHs were more likely to keep identity private, but this status did not affect hospital policies on anonymous reporting. Teaching hospitals were more likely to always allow anonymous reporting, but were less likely to keep reporters’ identities private.

Types of staff reporting adverse events

The risk managers were asked to estimate the share of occurrence reports submitted by each type of staff, with response options of all, most, some, a few or none for each staff type. Almost all risk managers reported that nursing staff submit all or most occurrence reports (table 6). Pharmacy staff, technicians and therapists were identified by more than half the hospitals as submitting some of the occurrence reports. More than 80% of the hospitals estimated that attending MDs submit only a few of the reports.

Table 6 Types of staff most likely to submit reports of adverse events

We estimated a logistic regression model to assess which hospital characteristics were associated with the extent to which attending physicians submit reports. The dichotomous dependent variable was given a value of 1 if a hospital risk manager responded “some,” “most” or “all” to the survey question about the share of adverse-event reports submitted by physicians. We found that attending physicians at larger hospitals and at hospitals with patient-safety programmes were more likely to submit occurrence reports to an adverse-event-reporting system (table 7).

Table 7 Factors associated with reporting of adverse events by physicians
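A minimal sketch of this kind of model is shown below, fitting a weighted binomial regression of the dichotomised physician-reporting outcome on hospital characteristics. The variable names, the dummy-coding choices and the use of the non-response weights as frequency weights are illustrative assumptions, not the study’s exact specification.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def fit_physician_reporting_model(df: pd.DataFrame):
        """Weighted logistic regression: 1 = at least 'some' reports come from physicians."""
        y = df["physician_reports"].isin(["some", "most", "all"]).astype(int)
        covariates = df[["bed_size_class", "ownership", "teaching", "rural",
                         "patient_safety_program", "cah"]]
        X = sm.add_constant(pd.get_dummies(covariates, drop_first=True).astype(float))
        model = sm.GLM(y, X, family=sm.families.Binomial(), freq_weights=df["weight"])
        result = model.fit()
        print(np.exp(result.params))  # coefficients exponentiated to odds ratios
        return result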

Distribution of reports regarding adverse events reported into the system

Virtually all the hospitals said they produce summary reports with occurrence data, but only 71 (2.3)% of them distribute these reports within the hospital (table 8). For those hospitals that distribute reports, all but a small percentage distribute them on either a monthly or quarterly basis. The hospitals varied in how long it took them to produce reports after the end of a reporting period, ranging from 2 weeks to longer than 1 month. The CAHs differed from the other hospitals, with a smaller percentage of CAHs reporting that they distributed reports (62% vs 73% of other hospitals; p<0.001).

Table 8 Distribution of hospitals by dissemination of adverse-event report information within the hospitals

Discussion of adverse-event reports with key hospital committees and departments

Risk managers were asked whether adverse events were discussed in specific committees and the frequency with which reports were provided to specific hospital departments. Our analysis focused on reporting to the three key hospital departments of senior administration, nursing and medical administration. We also identified two committees as important ones to receive and discuss information about adverse events—the hospital board or board committee and the medical executive committee. Another important high-level committee is the senior-management committee. We did not include this committee in the index measure because a large percentage of respondents indicated that they did not have this committee, resulting in a large amount of missing data for this question.

Our estimates suggest that only 25 (2.2)% of all hospitals distribute adverse-event reports to all three of the key departments. In a logistic regression model (not presented), hospital characteristics explained little of the variance across hospitals in the likelihood of their distributing occurrence reports to all three departments (r2 = 0.02). The only significant findings were that CAHs were significantly less likely to distribute reports to these departments (OR = 0.64 (0.26)), as were hospitals with computer-only reporting systems (OR = 0.56 (0.21)).

Reported adverse events were discussed with both the board and medical executive committees by 73 (2.3)% of the hospitals (table 9). Logistic regression results suggest that for-profit hospitals were more likely than not-for-profit hospitals to discuss adverse events with both committees, and government-owned hospitals were less likely to do so. Hospitals with patient-safety programmes were more likely to discuss adverse events with these committees, whereas CAHs, teaching hospitals and hospitals with computer-only reporting systems were less likely to do so.

Table 9 Factors associated with discussion of adverse events with the hospital board or committees and the medical executive committee

DISCUSSION

As patient safety became a priority for hospitals in many countries, there was general awareness among providers and policy makers that hospitals’ adverse-event-reporting activities needed strengthening, but data were not available to confirm the need or guide action. These survey results document the need to strengthen reporting processes in US hospitals, highlight priorities for action, and establish baseline data for future monitoring of improvement progress.

The large percentage of US hospitals that reported having centralised adverse-event-reporting systems was a positive finding, although the nature of their systems varied widely. Our results suggest that hospitals’ processes for reporting adverse events and acting upon this information need to be strengthened. Only small percentages of hospitals scored highly on each of the four performance indexes, indicating that many hospitals had not established environments that protect privacy to support reporting, had incomplete staff participation in reporting adverse events, or were not fully distributing and working with summary reports on events identified in their systems.

These survey results profile strengths and weaknesses of existing US hospital adverse-event-reporting systems, highlighting where actions are needed to improve them. A variety of factors can affect the usability of these systems, however, which cannot be captured readily in a national survey of this type. For example, reporting performance could be affected by the technical integrity of the system, the adequacy of staff training on reporting methods or consistency in employing effective reporting processes. Additional, more detailed assessments of hospital reporting systems are advisable, to identify actionable issues that can be corrected through performance-improvement interventions.

The results of this national survey raise questions regarding how the experiences of US hospitals compare with those in other countries, and how they can learn from each other. The specifics of the reporting status for US hospitals may differ from those in other countries. However, many issues likely are shared—for example, the need for broader reporting by physicians and variation in dissemination of information on reported events. In particular, comparisons with experiences of hospitals in countries with national adverse-event reporting could reveal possible effects of those external systems on hospitals’ internal reporting systems.1520

Low participation in adverse-event reporting by physicians has also been found in other studies in the US and other countries.122225 Reasons identified for physician reluctance to participate in reporting include risk of liability exposure or professional embarrassment, burdensome reporting methods, time required for reporting, perceptions of the clinical import of adverse events and lack of a sense of ownership in the process.92628 Physician participation may be higher than observed, however, if physicians are asking other staff (eg, nurses) to report identified adverse events rather than doing it themselves. More work is needed to clarify these issues and seek solutions to enhance physician reporting.

Other research has found that hospital leaders are concerned that external adverse-event reporting could increase their legal liability and increase lawsuits,21 which might also diminish their commitment to internal reporting. The implementation of PSOs, under PSQIA provisions, might help to alleviate these concerns and stimulate internal reporting activities by hospitals.

The wide variation in hospitals’ dissemination of summary reports generated by adverse-event-reporting systems raises questions about the effectiveness of follow-up by hospitals on reported occurrences, especially the finding that almost 30% of the hospitals that generate summary reports state that they do not distribute them at all within the hospital. Such issues limit the information available to hospital decision-makers about patient-safety issues, which in turn reduces the likelihood that hospitals will undertake actions to improve practices.

Hospitals with established patient-safety programmes performed better in a variety of aspects of adverse-event-reporting processes. Our findings are consistent with other research showing wide variation across hospitals in the adoption of patient-safety systems.19

The survey finding that many hospitals did not have supportive reporting environments is consistent with data from the Hospital Survey of Patient Safety Culture (HSOPS) benchmark database. The “non-punitive response to error” composite had the lowest average percentage positive response (43% and 44% for 2007 and 2008, respectively) among the hospitals submitting HSOPS data to the database. This composite addresses the extent to which staff feel that their mistakes and event reports are not held against them and that mistakes are not kept in their personnel file.2930

As stated above, this paper focuses on the first of two steps required to reduce adverse events for hospital patients—the nature and use of hospitals’ internal adverse-event-reporting systems. The survey also collected data on the types of actions taken by hospitals in response to reported events (the second step), which are not reported here. Hospitals varied widely in the extent to which they used information on reported events for a variety of actions, for example, analysis of root causes, training of staff or performance-improvement actions. Additional work is needed to document the effectiveness of hospital actions in this phase of the process, and how they are influenced by the usability of the reporting systems used.

Several study limitations also merit consideration. Because the survey data are self-reported by hospital risk managers, often based solely on their perceptions without supporting data, these results may be optimistic estimates of the performance of hospital reporting systems. Due to the effects of Hurricane Katrina, our final sample is representative of hospitals in all of the United States except southern Louisiana and southern Mississippi, rather than the entire country. Given the size of the sample and the consistency of responses, it is not likely that results would differ for a full national sample.

CONCLUSIONS

Findings from this hospital adverse-event-reporting survey document the current status of reporting systems, and point to several needed improvements in hospitals’ processes for reporting and acting upon identified occurrences. These results provide baseline data for future assessment of trends for changes in these reporting systems. PSQIA protections for hospitals reporting to PSOs could encourage such reporting by alleviating hospitals’ concerns about liability exposure, and could stimulate improvements in hospitals’ internal reporting systems. Support of these activities through establishment of other mechanisms that encourage hospitals to strengthen their reporting systems also would be useful.

Acknowledgments

We thank the risk managers at the hospitals in our sample for their willing participation in the survey. We also thank the staff at RAND Survey Research Group (SRG) and the University of Illinois Survey Research Laboratory (SRL) for administering the survey data-collection efforts, under the leadership of C Pham at RAND SRG, and J Ronco and R Hazen at SRL. The contributions of P Goldschmidt during preparation for survey administration are appreciated. This study was conducted with support from the Agency for Healthcare Research and Quality, US Department of Health and Human Services.

REFERENCES

Footnotes

  • Competing interests: None.

  • Ethics approval: Ethics approval was provided by the Human Subjects Protection Committee of the RAND Corporation.

  • See Commentary, p 400
