Abstract
Context Information is needed on the performance of hospitals' adverse-event reporting systems and the effects of national patient-safety initiatives, including the Patient Safety and Quality Improvement Act (PSQIA) of 2005. Results are presented of a 2009 survey of a sample of non-federal US hospitals and changes between 2005 and 2009 are examined.
Methods The Adverse Event Reporting System survey was fielded in 2005 and 2009 using a mixed-mode design with stratified random samples of non-federal US hospitals; risk managers were respondents. Response rates were 81% in 2005 and 79% in 2009.
Results Virtually all hospitals reported they had centralised adverse-event-reporting systems. However, scores on four performance indexes suggested that hospitals have not effectively implemented key components of reporting systems. Average index scores improved somewhat between 2005 and 2009 for supportive environment (0.07 increase; p<0.05) and types of staff reporting (0.08 increase; p<0.001). Average scores did not change for timely distribution of event reports or discussion with key departments and committees. Some within-hospital inconsistencies in responses between 2005 and 2009 were found. These self-reported responses may be optimistic assessments of hospital performance.
Conclusions The 2009 survey confirmed improvement needs identified by the 2005 survey for hospitals' event reporting processes, while finding signs of progress. Optimising the use of surveys to assess the effects of national patient-safety initiatives such as PSQIA will require decreasing within-hospital variations in reporting rates.
- Safety management
- adverse effects
- hospital reporting systems
- patient safety
- surveys
- health policy
- healthcare quality improvement
- health services research
- communication
- health professions education
- crew resource management
- failure modes and effects analysis (FMEA)
- safety culture
Introduction
In its report, To Err Is Human: Building a Safer Health System, the Institute of Medicine highlighted the importance of adverse event reporting as a foundation for patient safety improvement and identified the fragmented nature of reporting as a significant barrier to achieving improvements.1 In 2005, the RAND Corporation and the Joint Commission collaboratively administered the Adverse Event Reporting System (AERS) survey to a national sample of hospitals, to characterise the extent to which US hospitals have adverse event reporting systems and how they use them.2 RAND fielded the survey a second time in 2009 for a sample of hospitals that responded to the 2005 survey. In this paper, we present the results of that survey and examine changes in hospitals' reporting systems and practices since 2005.
In our work with the 2005 AERS survey results, we found many published sources that identified essential components of an effective reporting system. Key components that emerged were that a hospital's reporting system should be one element of a cohesive patient safety programme that includes identification of errors and occurrences through reporting,3–7 should be able to capture both adverse events and near misses,4 8 9 and should be linked to organisational leaders who can act on reports.4 10 Further, a broad range of staff throughout the hospital should participate in reporting, with confidentiality or anonymity provided for those who report occurrences.4 6 11 12
Using this information, we framed our analysis of the survey data to address four system components that should be in place for hospitals' adverse event reporting systems to be effective2:
a supportive environment that protects the privacy of staff who report occurrences;
broad reporting to the system by a range of types of staff;
timely distribution of summary reports that document reported occurrences for use in action strategies to prevent future adverse events from occurring;
senior-level review and discussion of summary reports by key hospital departments and committees for policy decisions and development of action strategies.
Aiming to compare the results for the 2 years and identify possible changes in performance, we replicated the 2005 analyses for the 2009 survey results, including anchoring performance measures on the four system components identified above.
Study objectives and framework
To reduce adverse events for patients, hospitals need to have effective reporting systems that identify risks and hazards in their systems plus effective performance improvement processes that act on reported information. The survey results reported here address the first of these steps, examining how hospitals' practices for collecting and disseminating the occurrence data needed to inform effective performance improvement have changed between 2005 and 2009. These 2 years of results provide information on national adverse event reporting performance before the patient safety organizations (PSOs) established under the Patient Safety and Quality Improvement Act of 2005 (PSQIA)13 began to have measurable effects on hospital reporting practices. The data from these two surveys can serve as baseline data in future analyses to track trends in improvements for internal reporting practices across the country, and assess the effects of the implementation of PSQIA to support hospitals in improving their reporting processes.
Design and methods
The AERS survey questionnaire was developed and pilot tested by Westat for the US Department of Health and Human Services Quality Interagency Coordination Task Force.14 Questions covered in the survey included whether hospitals collect information on adverse events, what information is collected, who reports occurrences, how their privacy is protected and uses of the data collected.
For both the 2005 and 2009 surveys, our samples were stratified random samples of non-federal US hospitals, and the surveys were completed with the risk manager at each hospital. We used the AERS survey questionnaire for risk managers in the 2005 survey with minor modifications to improve clarity and data completeness. For the 2009 survey, we made additional revisions while retaining all the questions used in the analysis of the 2005 survey data. Changes included adding questions at the end of the survey to gather more detailed data on the nature of the hospitals' patient safety programmes, and adding a question about the importance of having consistent reporting formats. To keep the 2009 survey the same length as the 2005 survey despite the new questions, we deleted 12 questions that were open-ended, performed poorly or were difficult to interpret.
We used the same mixed-mode (mail/telephone) design for data collection in both surveys because this method had yielded good response rates. We started with a mail survey with two waves of mail follow-ups, followed by computer-assisted telephone interviewing (CATI) for the remaining non-responders. The CATI instrument was tested to ensure that the questionnaire items appeared as designed, that the logical flow was correct, that appropriate range checks were in place and that the data were being recorded correctly. The survey took approximately 25 min to complete in both years.
The 2005 survey was administered to risk managers at 2050 non-federal hospitals, excluding those in southern portions of Louisiana and Mississippi. Hospitals in those areas had been affected by Hurricane Katrina at the time we started data collection, and we could not contact them. The sample was stratified by Joint Commission accreditation status, hospital ownership, and staffed bed size, which also yielded a good representation of teaching, urban/rural, and multi-hospital system status.
In April through September 2009, we administered the second survey to risk managers at a subset of the hospitals that responded to the 2005 survey. A sample of 1200 hospitals was drawn from the 1652 hospital risk managers who completed the 2005 survey, using random selection within the strata established for the 2005 survey. To achieve a representative sample across strata, the number of hospitals selected from each stratum was proportional to that stratum's size in the population.
For each survey sample, we created non-response weights to realign the sample characteristics with the target population. These weights were used in all the analyses.
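As a rough illustration of this weighting step, the sketch below computes stratum-level non-response weights for a toy sample; the stratum labels and counts are invented for illustration and are not the survey's actual strata.

```python
import pandas as pd

# Invented population counts per stratum (not the actual survey strata).
population = pd.DataFrame(
    {"stratum": ["A", "B", "C"], "pop_n": [800, 700, 550]}
).set_index("stratum")

# Invented responding hospitals.
respondents = pd.DataFrame(
    {"hospital_id": [1, 2, 3, 4, 5, 6],
     "stratum": ["A", "A", "B", "B", "B", "C"]}
)

# Non-response weight = population count / respondent count in each
# stratum, so the weighted respondents reproduce the population's
# stratum distribution.
resp_n = respondents.groupby("stratum").size().rename("resp_n")
weights = (population["pop_n"] / resp_n).rename("weight")
respondents = respondents.join(weights, on="stratum")
print(respondents)
```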
We established four indexes as summary measures of hospitals' reporting performance, which addressed the four components of an effective adverse event reporting system. These indexes are: supportive environment, reporting by a broad range of staff, timely distribution of summary reports, and review of reports by key departments and committees. Each index was measured as the sum of the values of dichotomised responses (yes=1; no=0) to survey items included in the index, as described in table 1.
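A minimal sketch of this scoring scheme, using invented item and hospital names rather than the actual questionnaire items:

```python
import pandas as pd

# Hypothetical yes/no items contributing to one performance index;
# the column names are illustrative only.
responses = pd.DataFrame(
    {
        "anonymous_reporting":   ["yes", "no",  "yes"],
        "identity_kept_private": ["yes", "no",  "no"],
        "report_kept_private":   ["no",  "yes", "yes"],
    },
    index=["hospital_1", "hospital_2", "hospital_3"],
)

# Dichotomise each response (yes=1, no=0) and sum across items
# to obtain each hospital's index score.
index_scores = (responses == "yes").astype(int).sum(axis=1)
print(index_scores)  # hospital_1: 2, hospital_2: 1, hospital_3: 2
```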
We first conducted cross-sectional analyses comparing the overall results from 2005 to the overall results from 2009, to examine changes over time across all hospitals. We calculated descriptive statistics of the sample characteristics and estimated distributions of hospitals on the performance indexes, and we estimated standard logistic regression models^i for individual components of the indexes, to assess how hospital characteristics were associated with specific aspects of reporting performance. In addition to reporting odds ratios (ORs), the logistic regression results were also converted to predicted probabilities using recycled predictions.15 We tested for any statistically significant differences between the 2005 and 2009 overall results, and adjusted for repeated observations for hospitals with data from both 2005 and 2009.
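To make the recycled-predictions step concrete, here is a minimal sketch using simulated data; the variable names (discussed, for_profit, beds) are assumptions for illustration, not the study's actual covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated hospital data (purely illustrative).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "discussed": rng.integers(0, 2, 500),   # outcome: reports discussed?
    "for_profit": rng.integers(0, 2, 500),  # ownership indicator
    "beds": rng.choice(["small", "medium", "large"], 500),
})

# Standard logistic regression of the outcome on hospital characteristics.
model = smf.logit("discussed ~ for_profit + C(beds)", data=df).fit(disp=False)

# Recycled predictions: set for_profit to each value for *every* hospital,
# predict, and average over the observed distribution of other covariates.
for value in (0, 1):
    avg_prob = model.predict(df.assign(for_profit=value)).mean()
    print(f"for_profit={value}: average predicted probability {avg_prob:.3f}")
```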
These cross-sectional analyses examined average net differences across all hospitals between 2005 and 2009, but did not capture changes in reporting for individual hospitals. To address this question, we performed cohort analyses for the subset of hospitals that responded to both the 2005 and 2009 surveys to examine within-hospital change over time. We compared hospitals' responses for the years 2005 and 2009 for each performance index, as well as for key individual measures that comprise the indexes.
Then we estimated a multinomial logistic regression model for each reporting performance measure, to assess how hospital characteristics were associated with the direction of within-hospital changes in measures from 2005 to 2009. The dependent variable for each model was the direction of change for the measure for each hospital (coded −1, 0 or +1).
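A sketch of this coding and model, again with simulated data and invented variable names; np.sign yields the −1/0/+1 direction coding described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated cohort: each hospital's index score in both years (illustrative).
rng = np.random.default_rng(1)
cohort = pd.DataFrame({
    "score_2005": rng.integers(0, 4, 300),
    "score_2009": rng.integers(0, 4, 300),
    "cah": rng.integers(0, 2, 300),  # critical access hospital indicator
})

# Direction of within-hospital change: -1 declined, 0 unchanged, +1 improved.
cohort["direction"] = np.sign(cohort["score_2009"] - cohort["score_2005"])

# Multinomial logistic regression of change direction on a characteristic.
mnl = smf.mnlogit("direction ~ cah", data=cohort).fit(disp=False)
print(mnl.summary())
```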
All regression models included Joint Commission accreditation status, bed size, ownership, teaching status, rural location, existence of a patient safety programme and status as a critical access hospital (CAH). Because CAHs differ from other hospitals in their smaller size and more limited services, they may also differ in their adverse event reporting systems and practices.
The cohort regression models included an additional independent variable indicating whether a hospital's risk manager had changed between 2005 and 2009. This risk-manager turnover variable was coded '1' if the risk manager reported having a nursing degree in only one of the 2 years and '0' otherwise; we used the nursing degree as the marker because most of the responding risk managers were nurses. Such a measure does not capture all personnel changes, because a person with a particular degree who left the position could be replaced by someone else with the same degree. However, it provides some information about the possible effects of risk manager changes on inconsistency in reporting. We used this measure because the 2009 survey did not include a question about whether the risk manager had also responded to the 2005 survey, which would have provided complete information on this potential effect on survey responses.
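The derivation of this proxy amounts to an exclusive-or of the two years' degree reports; a minimal sketch with invented column names:

```python
import pandas as pd

# Hypothetical cohort responses: 1 if the risk manager reported a
# nursing degree in that year's survey, 0 otherwise.
cohort = pd.DataFrame({
    "nursing_degree_2005": [1, 1, 0, 0],
    "nursing_degree_2009": [1, 0, 1, 0],
})

# Turnover proxy = 1 when a nursing degree was reported in exactly
# one of the two years (an exclusive-or of the two indicators).
cohort["rm_turnover"] = (
    cohort["nursing_degree_2005"] != cohort["nursing_degree_2009"]
).astype(int)
print(cohort)
```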
Results
We report comparisons for the four reporting performance indexes as well as some examples of changes found in other key measures between 2005 and 2009 (see Farley et al for full results of this analysis16).
We obtained overall survey response rates of 81% for the 2005 survey (N=1652) and 79% for the 2009 survey (N=952). The hospital samples for the 2005 and 2009 surveys had similar characteristics (table 2). The characteristics of these hospitals reflected those of the larger hospital population; as a result, the unweighted and weighted distributions of hospitals in each sample are similar. Therefore, although we used weights in the analyses, they had a minor effect on the results.
In both samples, all but a small percentage of the risk managers reported that their hospitals had a centralised adverse event reporting system (panel A, table 3) (difference non-significant at p=0.78). The type of system they reported having, however, changed from 2005 to 2009. In 2005, only 12.4% of hospitals reported having computer-only systems, while 71.3% reported having systems based on both computer and paper. By 2009, the percentage of hospitals with computer-only systems had increased to 23.1% (a 10.7 percentage point increase, significant at p<0.001).
Cross-sectional analyses: reporting performance by hospitals in 2005 and 2009
Hospitals' average index scores showed statistically significant improvements between 2005 and 2009 for the first two indexes (panel B, table 3). Average scores for the supportive environment index increased from 1.08 in 2005 to 1.15 in 2009 (increase significant at p=0.026), and average scores for the types of staff report index increased from 0.95 in 2005 to 1.03 in 2009 (increase significant at p<0.001). Average scores did not change significantly for the indexes on timely distribution of event reports or discussion with key departments and committees (p=0.20 and p=0.42 respectively on the changes).
Cross-sectional analyses: key factors contributing to index scores in 2005 and 2009
Risk managers were asked if hospital policy provided for anonymous reporting or, if reported non-anonymously, for keeping the reporter's identity private (panel C, table 3). The percentage of hospitals that said they always provided anonymous reporting increased from 47.2% in 2005 to 54.0% in 2009 (statistically significant difference at p=0.001). No significant change was found in the percentage of hospitals that always kept the identity of reporters private or that always kept reports in an employee's file; however, the percentage of hospitals that reported they never kept the identity of reporters private decreased from 8.4% in 2005 to 5.3% in 2009 (decrease significant at p=0.002) (not shown in table).
Virtually all the hospitals said in both 2005 and 2009 that they produced summary reports of occurrence data, and more than half said these reports were produced at least monthly. However, in 2005, only 71% (±2.3%) of the risk managers said they distributed these reports within the hospital, and the percentage declined to 65% (±1.6%) in 2009 (significant difference at p<0.001).
For analysis of dissemination of event reports, we focused on the two components of the index for discussion with key departments and committees: (1) the hospital departments of senior administration, nursing and medical administration, and (2) the hospital board (or a board committee) and the medical executive committee. Although large percentages of the hospitals reported to each of these departments and committees, the percentages reporting to all of them did not approach the desired 100%. Only 25% (±2.2%) of all hospitals reported in 2005 that they distributed adverse event reports to all three of the key departments, and the percentage declined to 21% (±1.4%) in 2009 (significant difference at p=0.03). However, 73% (±2.3%) of hospitals reported in 2005 that adverse event reports were discussed with both the board and medical executive committees, and the percentage increased to 77% (±1.4%) in 2009 (difference significant at p=0.003).
The results of our logistic regression models suggest that the effects of hospital characteristics on dissemination of reports were weaker in 2009 than in 2005 (table 4). In 2005, significant differences in reporting percentages were found across types of ownership, with for-profit higher than both not-for-profit (83% reporting vs 72%, p=0.001) and government owned (83% vs 66%, p=0.046). In addition, hospitals with patient safety programmes were more likely to discuss adverse events with these committees (p=0.017), whereas CAHs (p=0.006), teaching hospitals (p=0.032), and hospitals with computer-only reporting systems (p=0.030) were less likely to do so.
In 2009, the only significant difference based on hospital characteristics was for ownership: for-profit hospitals were more likely than not-for-profit hospitals to discuss events with both committees (91% vs 76%, p≤0.001). The other characteristics that were significant in 2005 were not significant in 2009, and for most of them the estimated ORs also moved closer to 1.0.
Cohort analysis: identifying performance changes within hospitals
Our examination of within-hospital changes for the cohort of hospitals that responded to both the 2005 and 2009 surveys (N=952) revealed differences between the 2 years in reported status on many measures. The pattern we found is illustrated by cross-tabulations of responses for two measures: the type of event reporting system and the performance index for a supportive reporting environment. Other measures for which we found similar response patterns included production of event reports, the four performance index scores and having a patient safety programme.
In table 5, our comparison of responses for 2005 and 2009 by the cohort hospitals shows the percentage of hospitals that reported the same status on each measure in both years, as well as the percentage that reported reduced status in 2009 (declined) and the percentage that reported increased status in 2009 (improved). For example, of those hospitals that had paper-based reporting systems in 2005, 51.1% said they had paper systems in 2009 (stayed the same) and 48.9% said they had either computer-only or both paper and computer systems (improved). (None could decline from this level of the measure because it is the lowest level.)
We found, however, that 37.0% of the hospitals that had computer-only systems in 2005 declined in 2009 (to either both paper and computer or just paper), and 10.4% of those that had both paper and computer systems in 2005 declined in 2009 (to paper only). Similar patterns were found for changes from 2005 to 2009 in the supportive environment index.
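A sketch of the cross-tabulation behind table 5, using invented data; the system types are ordered paper < both < computer-only, matching the direction-of-change interpretation above.

```python
import pandas as pd

# Invented cohort responses for the reporting-system measure.
order = ["paper", "both", "computer"]  # lowest to highest level
cohort = pd.DataFrame({
    "system_2005": ["paper", "both", "computer", "both", "paper", "both"],
    "system_2009": ["both", "paper", "both", "both", "paper", "computer"],
})
for col in ("system_2005", "system_2009"):
    cohort[col] = pd.Categorical(cohort[col], categories=order, ordered=True)

# Row percentages: for each 2005 status, the share of hospitals at each
# 2009 status (same on the diagonal, declined below it, improved above it).
xtab = pd.crosstab(cohort["system_2005"], cohort["system_2009"],
                   normalize="index") * 100
print(xtab.round(1))
```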
From the results of the estimated multinomial logistic models, we found few relationships between increases or decreases in hospital reporting performance and either hospital characteristics or the risk manager turnover variable. The only measure significantly associated with a hospital characteristic was a decrease in the policy of anonymous reporting among for-profit hospitals (p<0.001). In addition, the risk manager turnover variable was associated with decreases in only two of the performance measures: type of reporting system (marginally significant at p=0.065) and frequency of physicians reporting events (p=0.039).
Discussion
The samples of hospitals for the 2005 and 2009 AERS surveys were designed to allow us to compare the adverse event reporting systems and processes for two cross-sectional samples of hospitals, and also to analyse within-hospital changes over time for a cohort of hospitals that were in both the samples. Both surveys generated data on baseline reporting practices by hospitals before national initiatives were undertaken to support hospitals in improving reporting processes. Therefore, we did not expect large changes in the measures in the survey.
Indeed, many results from the 2005 and 2009 AERS surveys tended to be similar, but we also found some indications of improved practices. We note, however, that inconsistencies of within-hospital responses found between 2005 and 2009 create uncertainty that must be considered when interpreting these results.
Our cross-sectional comparisons suggested that the following possible trends may have occurred in the internal adverse event reporting systems of US hospitals:
The percentage of hospitals that have computer-only event reporting systems appears to have increased from 2005 to 2009, and the increase appears to have occurred across all types of hospitals, with or without a patient safety programme.
Hospitals appear to have improved on two of the four indexes we created to summarise hospital performance on four key aspects of event reporting processes: supportive environment for reporting and types of staff reporting.
Compared with 2005, differences across hospitals in measures of hospital reporting performance in 2009 appear to be less influenced by hospital characteristics.
For the longitudinal analysis of the cohort of hospitals in both survey samples, we hypothesised that some hospitals would show improvements in their reporting practices and others would not. We did not expect performance levels of many hospitals to decrease between 2005 and 2009, but this is what we found. If the decreases are real, then the differences observed between the 2005 and 2009 cross-sectional samples would be interpreted as net effects of some hospitals making improvements while others moved in the opposite direction.
We know from previous work that hospitals are in the relatively early stages of implementing patient safety practices in general, including implementation challenges in which some efforts succeed and others fall short and are discontinued.17 For example, the decommissioning of poorly performing health information systems could result in a shift of a hospital's reporting system from a computerised to a paper-based system. Changes in hospital leadership or risk management staff also may have brought changes in priorities away from some patient safety activities, resulting in declines in reporting performance measures. Thus, some of the apparent declines in performance on the measures from these surveys may be real.
Alternatively, at least some of the observed changes may be related to risk manager turnover, accompanied by differing perspectives of new risk managers about their hospitals' reporting practices, or differences in interpretation of the survey questions. Although the field testing of the original survey found that risk managers were the personnel best positioned in the hospital to provide valid information,14 we knew that they would use some judgement in responding to the questions. Our analysis of this issue was limited by the absence of a measure that captured all the risk manager turnover that occurred from 2005 to 2009. We have anecdotal information from our survey data collection staff that as many as half of the risk managers who responded to the 2009 survey may have been new since we conducted the 2005 survey. Thus, differences in perspectives between old and new risk managers were likely to have contributed to within-hospital differences in measures between 2005 and 2009.
The Agency for Healthcare Research and Quality (AHRQ) is using this survey to track the effects of the PSO programme on hospitals' event reporting practices over time, with the 2005 and 2009 surveys providing baseline data. It will therefore be important to examine the within-hospital inconsistencies in reporting performance further before fielding another survey. The reasons for the apparent declines in performance on the reporting process measures need to be explored.
Using case study methods, this information could be gathered from the risk managers and other management staff at selected hospitals in the sample whose risk managers changed between 2005 and 2009. Such a study should assess how much of the apparent decline in reporting process performance was real, as opposed to reflecting differences in staff definitions or perceptions of those processes. The study results would guide revisions to the questionnaire for future surveys, to increase the accuracy and consistency of the survey data collected.
Conclusions
Findings from these two baseline hospital adverse event reporting surveys point to several needed improvements in hospitals' processes for reporting and acting upon identified occurrences. PSQIA protection for hospitals reporting to PSOs could encourage such reporting by alleviating hospitals' concerns about liability exposure, and could stimulate improvements in hospitals' internal reporting systems. Other mechanisms should also be sought to encourage hospitals to strengthen their reporting systems. Further, it will be important to refine future AERS survey methods to manage issues of within-hospital variations in reporting, so that the survey can be used effectively to track performance changes over time and assess the possible effects of the PSQIA and related actions to encourage hospitals to strengthen their internal reporting processes.
Acknowledgments
We thank the risk managers at the hospitals in our sample for their willing participation in the 2005 and 2009 surveys. We also thank the staff at RAND Survey Research Group for administering the survey data collection efforts and our RAND colleague, Scott Ashwood, for his data management and programming support for the analysis of the survey data.
Footnotes
Funding This study was conducted with support from the Agency for Healthcare Research and Quality, US Department of Health and Human Services. Grant Number 290-02-0010 (contract).
Competing interests None.
Patient consent This study did not involve patients. It was a survey of risk managers at US hospitals.
Ethics approval RAND Human Subjects Protection Committee.
Provenance and peer review Not commissioned; externally peer reviewed.
^i Logistic regression models are used when the dependent variable in a model is dichotomous (eg, yes/no) or discrete (eg, race categories), which does not meet the assumption of normal distribution required for use of standard linear regression models.