Data feedback efforts in quality improvement: lessons learned from US hospitals
  1. E H Bradley1,
  2. E S Holmboe2,
  3. J A Mattera3,
  4. S A Roumanis3,
  5. M J Radford4,5,
  6. H M Krumholz1,3,5
  1. Section of Health Policy and Administration, Department of Epidemiology and Public Health, Yale University School of Medicine, New Haven, CT, USA
  2. Department of Medicine, Yale University School of Medicine, New Haven, CT, USA
  3. Yale-New Haven Hospital, Center for Outcomes Research and Evaluation, New Haven, CT, USA
  4. Yale-New Haven Health, Center for Outcomes Research and Evaluation, New Haven, CT, USA
  5. Section of Cardiovascular Medicine, Department of Medicine, Yale University School of Medicine, New Haven, CT, USA

  Correspondence to:
 Dr H Krumholz
 Yale University School of Medicine, 333 Cedar Street, PO Box 208088, New Haven, CT 06520-8088, USA; maria.johnson@yale.edu

Abstract

Background: Data feedback is a fundamental component of quality improvement efforts, but previous studies provide mixed results on its effectiveness. This study illustrates the diversity of hospital based efforts at data feedback and highlights successful strategies and common pitfalls in designing and implementing data feedback to support performance improvement.

Methods: Open ended interviews were conducted in 2000 with 45 clinical and administrative staff in eight US hospitals concerning their perceptions of the effectiveness of data feedback in supporting performance improvement efforts. The hospitals were chosen to represent a range of sizes, geographical regions, and β blocker improvement rates over a 3 year period. Data were organized and analyzed in NUD-IST 4 using the constant comparative method of qualitative data analysis.

Results: Although the data feedback efforts at the hospitals were diverse, the interviews suggested that seven key themes may be important: (1) data must be perceived by physicians as valid to motivate change; (2) it takes time to develop the credibility of data within a hospital; (3) the source and timeliness of data are critical to perceived validity; (4) benchmarking improves the meaningfulness of data feedback; (5) physician leaders can enhance the effectiveness of data feedback; (6) data feedback that profiles an individual physician’s practices can be effective but may be perceived as punitive; (7) data feedback must persist to sustain improved performance. Embedded in several themes was the view that the effectiveness of data feedback depends not only on the quality and timeliness of the data, but also on the organizational context in which such efforts are implemented.

Conclusions: Data feedback is a complex and textured concept. Data feedback strategies that might be most effective are suggested, as well as potential pitfalls in using data to promote performance improvement.

  • quality improvement
  • data feedback
  • acute myocardial infarction
  • performance monitoring

Data feedback, the process of monitoring performance in practice, is a central component of quality management.1,2 Experts argue that data feedback is fundamental to improving clinical practice,3–6 and national initiatives in several countries to improve quality of care now include monitoring and dissemination of performance data.7–12 In a previous qualitative study of quality improvement efforts to increase β blocker use after acute myocardial infarction (AMI)13 we developed a taxonomy for performance improvement efforts which included six broad domains: organizational goals, administrative support, support among clinicians, design and implementation of improvement initiatives, use of data, and contextual factors. In the current study we explore one key domain of this taxonomy—the use of data. Specifically, we describe the common themes in the design and implementation of diverse data feedback efforts and highlight what participants believed were successful strategies and common pitfalls in designing and implementing data feedback to support performance improvement efforts.

Numerous studies, including several randomized controlled trials, have assessed the efficacy of specific data feedback initiatives implemented at the institutional level. However, the results of these studies are mixed. Reviews14–20 reveal that some studies indicate data feedback can improve practice, while other studies indicate little or no effect. In addition, recent qualitative studies and reviews of national efforts in the UK9,10 and the US8,11,12 to disseminate performance data have identified some, albeit limited, success of such efforts in enhancing quality of care. These diverse findings in the literature highlight our relative ignorance about how and why such efforts might influence quality of care. Understanding how different approaches to data feedback might affect physicians’ practices is paramount to designing and implementing successful data feedback efforts.

The objective of this analysis was to identify key themes about effective approaches, as well as pitfalls to avoid, in using data feedback to support performance improvement efforts. We studied efforts in eight hospitals that varied substantially in their performance on the quality indicator of β blocker use after AMI. The qualitative study design and in-depth interviews performed are well suited to this descriptive objective.

We chose to study β blocker use after AMI because its efficacy and effectiveness are well established21–26; however, studies continue to demonstrate its underutilization.21,27–30 Furthermore, the American College of Cardiology (ACC) and American Heart Association (AHA) have endorsed the use of β blockers after AMI,31 and the Centers for Medicare and Medicaid Services (CMS), the National Committee for Quality Assurance (NCQA), the National Quality Forum (NQF), and the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) now employ β blocker use after AMI as a quality indicator.

METHODS

Study design and sample

This qualitative study was based on open ended interviews with clinical and administrative staff during eight hospital site visits from March to June 2000, as previously described.13 Study hospitals were selected to represent a range of sizes, geographical regions, and β blocker improvement rates over a 3 year period. Beta blocker use rates at discharge were determined using data from the largest ongoing registry of AMIs and related care available in the US, the National Registry of Myocardial Infarction (NRMI).32 Changes in rates of β blocker use were calculated as the rate of β blocker use at discharge during the follow up period (April 1998 to September 1999) minus the rate of β blocker use at discharge during a baseline period (October 1996 to March 1998) at each hospital. The timing of the follow up and baseline periods was selected because they coincided with substantial national attention on improving the use of β blockers for patients with AMI.21,33
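
For illustration only, the change score calculation described above can be sketched in a few lines of Python; the hospital labels and rates below are hypothetical stand-ins, not the NRMI study data.

    # Hypothetical discharge beta blocker rates (not the NRMI study data).
    # A hospital's change score is its rate in the follow up period
    # (April 1998 to September 1999) minus its rate in the baseline
    # period (October 1996 to March 1998), in percentage points.
    baseline_rate = {"hospA": 0.62, "hospB": 0.48, "hospC": 0.71}
    followup_rate = {"hospA": 0.81, "hospB": 0.42, "hospC": 0.78}

    change_in_points = {
        h: round((followup_rate[h] - baseline_rate[h]) * 100)
        for h in baseline_rate
    }
    print(change_in_points)  # {'hospA': 19, 'hospB': -6, 'hospC': 7}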

Sites were chosen using purposeful sampling consistent with standard qualitative sampling methodology.34–36 We anticipated that data feedback efforts might vary by hospitals’ improvement rates in β blocker use, their size, and their geographical location, so a hospital sample was sought that reflected diversity in these characteristics. To ensure that the study hospitals reflected a range of β blocker use, we arrayed all eligible hospitals by deciles according to their changes in β blocker use rates. We then randomly selected hospitals from the lowest two deciles (representing declines in β blocker use rates ranging from −22 to −6 percentage points), the middle two deciles (representing increases in β blocker use rates ranging from 5 to 7 percentage points), and the highest two deciles (representing increases in β blocker use rates ranging from 19 to 35 percentage points). In two cases randomly selected hospitals did not meet selection criteria; in both cases the randomly selected hospital was located in California, which was already represented by two hospitals in the sample. We therefore proceeded to the next randomly selected hospitals from the same deciles. With this selection process we were able to achieve a sample that represented diversity in key aspects of the organization: β blocker use after AMI, size, and geographical region.
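
A minimal sketch of this decile based selection follows, assuming hypothetical change scores and illustrative stratum counts; the replacement rule used in the study (moving to the next random draw when a selected hospital did not meet the criteria) is omitted for brevity.

    import random

    random.seed(0)  # reproducible illustration

    # Hypothetical change scores in percentage points for eligible hospitals
    # (in the study these came from the NRMI registry).
    changes = {f"hosp{i:03d}": random.uniform(-22, 35) for i in range(200)}

    # Array all eligible hospitals by decile of their change in beta blocker use.
    ranked = sorted(changes, key=changes.get)
    n = len(ranked)
    deciles = [ranked[i * n // 10:(i + 1) * n // 10] for i in range(10)]

    # Pool the lowest two, middle two, and highest two deciles, then draw
    # randomly within each stratum (counts here are illustrative; the study
    # visited eight hospitals in total).
    strata = {
        "lowest": deciles[0] + deciles[1],
        "middle": deciles[4] + deciles[5],
        "highest": deciles[8] + deciles[9],
    }
    sample = {name: random.sample(pool, 3) for name, pool in strata.items()}
    print(sample)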

Additional hospitals were selected and visited until no new concepts were identified by the additional interviews. This occurred after the eighth hospital site visit and 45 interviews. The research team was blinded to hospital β blocker use rates until the completion of the data collection and coding. The characteristics and discharge β blocker use rates of the study hospitals are shown in table 1.

Table 1 Study hospitals

Interview schedule

The investigators conducted open ended interviews in person with physician, nursing, quality management, and administrative staff identified by the director of quality assurance or quality management as key staff involved with improving the care of patients with AMI. We began the data monitoring and feedback part of the interview with a grand tour question, “Would you describe your data feedback efforts? How have you used data to monitor and improve performance in AMI?” In addition, probes were used if the participants did not spontaneously describe how the data were collected and disseminated and the degree to which the data were effective in improving care.

Between four and seven individuals were interviewed at each hospital for a total of 45 participants (14 medical staff, 15 nursing staff, 11 staff from quality management or quality assurance departments, and five senior administrators). As is common with in-depth interviewing,37 interviews were about 90 minutes in length. They were conducted by at least two individuals who were prepared at the master’s or doctoral level and who represented diverse backgrounds including cardiology, internal medicine, health services research, public health, and nursing. The use of more than one interviewer with different backgrounds and training has been suggested as a means of enhancing the breadth of probes asked and of increasing the validity of data collected.37,38 All researchers had substantial background and expertise in quality improvement. This background may have biased the team toward particular aspects of quality improvement or may have biased participants in describing expected rather than authentic responses. However, at the same time, the interviewers’ understanding of quality improvement issues enhanced the technical understanding of specific interventions described by the participants, and every effort was made to ensure confidentiality and to make the participants feel comfortable revealing the truth of their experiences. All interviews were audiotaped and transcribed by independent professional transcriptionists. The Human Investigation Committee of the Yale University School of Medicine approved all research procedures.

Data analysis

Interview data regarding data monitoring and feedback were coded and analyzed using the constant comparative method of qualitative data analysis.35,39 Although data from the interviews were previously analyzed to identify the various domains of performance improvement efforts,13 the present analysis adds to our understanding of the diversity of data monitoring and feedback in particular, as well as characteristics of such efforts that may influence their effectiveness.

The code structure specific to data monitoring and feedback was developed iteratively, based on initial review of the first two transcripts and then further in steps with each successive set of interviews. Initially, line by line review and coding of the first two transcripts were accomplished in joint sessions with four researchers (EB, EH, SR, JM) with different backgrounds. Together, the researchers discussed the content and meaning of the interview data, coding distinct concepts as they emerged. Based on review of the first transcripts, an initial code structure was developed. Successive interviews were reviewed and coded after each one was completed, by at least two researchers per transcript. During this process the code structure was expanded and refined. As new codes were added to the code structure, transcripts were re-reviewed by a subset of researchers (EB, EH) to ensure later codes were applied consistently to earlier transcripts. During its development the code structure was reviewed three times by the full research team for logic and breadth. Discrepancies in coding and interpretation were resolved through group discussion and negotiation. Coded data were entered in NUD-IST 4 (Sage Publications Software, Thousand Oaks, CA) to assist in reporting recurrent themes, common links among similar concepts, and quotations to illustrate the dimensions.

Several research techniques were used to ensure that data analysis was systematic and verifiable, as recommended by experts in qualitative research.36,39–42 These included consistent use of open ended questions, audiotaping and independent professional preparation of the transcripts, coding and analysis of the data using an explicit coding structure developed in the study, consideration and discussion of discrepant interpretations, and the creation of an analysis audit trail to document analytical decisions.

RESULTS

Participants reported a range of experiences in conducting data monitoring and feedback for β blocker use after AMI. From these descriptions, several common themes emerged (box 1). Together, the themes illustrate common perceptions of participants who had been involved in data feedback efforts at their hospitals with regard to effective approaches and potential pitfalls in designing and implementing data feedback efforts. “Higher performing hospitals” were defined as those with an improvement in β blocker use of at least 10 percentage points over time and follow up rates of β blocker use of 65% or more (sites H4, H7, and H8 in table 1); the others are described as “lower performing hospitals”.
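
Stated compactly, this classification is a simple predicate; the sketch below applies it to invented rates (the 10 percentage point and 65% thresholds come from the definition above, the data do not).

    # Thresholds from the definition above; the rates passed in are hypothetical.
    def is_higher_performing(baseline_pct: float, followup_pct: float) -> bool:
        improvement = followup_pct - baseline_pct
        return improvement >= 10 and followup_pct >= 65

    print(is_higher_performing(55, 80))  # True: +25 points and 80% follow up rate
    print(is_higher_performing(60, 58))  # False: rate declined and is below 65%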

Box 1 Common themes about what makes data feedback effective in the hospital setting

  • Theme 1: Data must be perceived by physicians as valid to motivate change

  • Theme 2: It takes time to develop the credibility of data within a hospital

  • Theme 3: The source and timeliness of data are critical to perceived validity

  • Theme 4: Benchmarking improves the meaningfulness of the data feedback

  • Theme 5: Physician leaders can enhance the effectiveness of data feedback

  • Theme 6: Data feedback that profiles an individual physician’s practices can be effective but may be perceived as punitive

  • Theme 7: Data feedback must persist to sustain improved performance

Theme 1: Data must be perceived by physicians as valid to motivate change

Participants at every site emphasized the importance of having valid data—valid as perceived by physicians—in order to influence physician practices and documentation of those practices. If the hospital was able to monitor and report data on β blocker use that were credible to physicians, participants typically were confident that improvements would ensue. For instance, participants at hospitals with higher performance and improvement in β blocker use said:

“Physicians are scientists by nature. You don’t need to push and pull. Just good, validated data...and anyone will respond.” (Vice President, Administration, H4)

“Again, I think it is the data [changes happened because of the data]. If you have honest data, people can’t argue with it. If you have proof. It wasn’t like we were making up the numbers.” (Care Coordinator, higher performing hospital, H8)

“You learn pretty quick that if you want to create change, you have to have the data to back up why you think change is possible or why we need to look at the system . . . [we] collect data to use it as a tool or a change agent.” (Quality Improvement Director, H7)

Participants at several hospitals described the effects of having data that were not credible. Some reported that inaccurate data not only stalled change but also hurt the improvement process more generally, casting doubt on the quality improvement efforts themselves. One participant explained:

“The most dangerous kind of data is if it is not collected right. The so called ‘garbage-in, garbage-out’. It’s garbage if the data comes out as inaccurate.” (Quality Improvement Director, H7)

Theme 2: It takes time to develop the credibility of data within a hospital

Even in the hospitals with higher performance, many participants reported that it took time to develop credible data and for physicians to believe the data; several participants reported strategies they used to increase the credibility of the data. In one hospital, credibility was achieved by having the quality improvement nurses review charts jointly with the most influential cardiologist over a period of time, as described here:

“Very kindly and with respect, we would say, ‘would you like to sit down with me and review these 10 charts?’ And then you’d go through the process... and you’d say, ‘I just wrote down what you wrote.’ And then all of a sudden, [the physician said] ‘OK, OK’. So it took a while and some physicians it took a while longer, but eventually they realized ... and all of a sudden their documentation started reflecting a little more of the protocols ... It’s been a process. A slow process.” (Quality Improvement Director, H4)

Another strategy was to investigate any discrepancy in the data raised by physicians, and to do so as quickly as possible. In one higher performing hospital the Chief of Cardiology, who was responsible for presenting the data feedback to the internists and cardiologists, described how building data credibility took time:

“If we have any doubts about the data, we’ll go back until we’re sure it’s clean and think we have established credibility that, if we say this is how we measured it, the physicians really don’t go after us anymore grilling about, ‘Now where did that come from?’ And they did at first.” (Chief of Cardiology, H8)

Theme 3: The source and timeliness of data are critical to perceived validity

The sources of data—including who abstracts, analyzes, reports, and presents the data—were perceived to be central to their credibility and their potential influence on physician practice. Data from the external registry were reported to be particularly credible. Illustrating the credibility of this source, a participant said:

“Physicians really value the [external] data reporting systems because they read about that and so that gives a high level of credibility — their knowing all of the definitions and the way that is collected or the criteria [they use]. And sometimes...if we’re pulling the data off our own information systems, they’ll question it, ‘How valid is that data?’ But somehow if it comes from the national registry, ‘Now, that’s valid.’ There’s a magical potion to it.” (Quality Management Director, H7)

However, several participants also noted that the quality of the abstraction was critical to the data’s perceived validity. Participants described problems in data credibility arising from poor quality abstraction and poor quality presentation:

“We were getting different data that we couldn’t believe and it turns out that had to do with data collection. She was very good at it but she didn’t know all the issues.” (Chief of Medicine, H1)

“Many times the people who are presenting it are not clinically competent and have already made a few verbal faux pas prior to presenting the data and so now, what do people say when they see the data? ‘This isn’t right.’” (Quality Management Director, H3)

Similarly, a strong view reported by participants was that the timeliness of data is central to its credibility. The ability to collect real time data and feed it back to physicians was reported to be particularly effective in changing practices.

“I guess I caught on pretty quickly to what information [the cardiologists] wanted and then started collecting my data to be more real time, so they were getting information that was no more than 3–6 months old, rather than being 2 years old.” (Care Coordinator, H5)

“Part of the problem is our data has always been incomplete. We are always so far behind.” (Quality Management Director, H3)

“Some frustrations are that we have the statewide data source and it’s always 2 years behind.” (Quality Management Director, H7)

Theme 4: Benchmarking improves the meaningfulness of the data feedback

Comparisons of a hospital’s rate of β blocker use with those of similar institutions were described as catalysts for change in many hospitals, especially when the benchmark hospitals were in the same market or the same multi-hospital system. As described by the Chief of Medicine at a higher performing hospital:

“When the quality management staff point out that our numbers are below the State average, which they used to be, people begin to think, ‘I thought we were supposed to be a pretty good place. We ought to be better than other hospitals in the state. Why are we lower than these other people? What do they know that we don’t know?’” (Chief of Medicine, H8)

Participants commonly reported that data alone could not produce change, but that comparing data over time within their own hospital, as well as with other hospitals, could provide the impetus for continued improvement. One participant stated:

“The mere collection of data is meaningless unless you can utilize it, but if we don’t utilize it and compare ourselves to ourselves on a quarterly basis, on a yearly basis, it’s meaningless.” (Chief of Medicine, H7)

In addition, participants who had experienced public reporting of performance data described the strong impact of benchmarked data in the public domain, which allow consumers and payers to compare hospitals on quality indicators. This was viewed as very effective in forcing changes; however, the data compared in the public domain (for one hospital, in the city newspaper comparing all hospitals in the state) were typically length of stay and costs. Beta blocker rates were not described as being reported in this way.

Theme 5: Physician leaders can enhance the effectiveness of data feedback

Data feedback efforts that were described as most successful used respected physician leaders to review and present the data to other physicians. Participants often described these physicians as “champions” and emphasized their willingness to approach their peers about the data on quality indicators such as β blocker use after AMI. The quality manager in a higher performing hospital described the integration of a physician leader in their data feedback program, saying:

“So, for example, [we] sit down with the Medical Director and go over the data; he has an opportunity to ask questions, identify holes in the data, try to predict what questions might be asked at a cardiology subsection. The data have to be scrubbed or cleaned up, then brought to the larger group for discussion. We pick out a couple of physician champions to look at the data, and then once they have a comfort level, they oftentimes will even be able to speak to the data when it comes to the table at a subsection meeting.” (Quality Improvement Manager, H7)

In contrast, in one lower performing hospital, difficulties getting physicians to present the data feedback and discuss it openly with their peers were apparent, as illustrated by this statement from the quality improvement manager:

“When I first suggested this go back to the physician, you get into some political issues. And then, [the doctors] weren’t real comfortable at first giving this [the data] back [to their peers]...that was a real learning curve.” (Quality Improvement Manager, H2)

Theme 6: Data feedback that profiles an individual physician’s practices can be effective but may be perceived as punitive

Individual cardiologists at several hospitals that used physician profiling described that aspect of data feedback as helpful. For instance, one said:

“What I think has worked best in this process is just having somebody like [the care coordinator] who’s got the data that she can feed back to me ... just knowing I maybe only sent 20% of my acute MIs home on beta-blocker at least sensitizes me to the fact.” (Cardiologist, H5)

Similarly, in another hospital a physician also supported physician-specific feedback if the data were “really good”.

“It would be nice to print out to each physician what percentage of their MIs get discharged on these medications because everybody says, ‘We know all this.’ Knowledge is not the same as compliance. I think if we could have really good data, we could have greater impact.” (Cardiologist, H3)

Despite the recognized potential of physician-specific data feedback, the fear of being overly punitive with such data was described by a few participants. In one of the higher performing hospitals, however, the administration described their conscious efforts not to make data feedback a punitive tool. For instance, staff at this hospital said:

“It’s [data feedback] not policing. It’s coaching. It’s encouraging. It’s lots of smiles.” (Vice President, Administration) “And we never really pick out the person. I think the data [are] always presented as institutional data, and it has never been a punitive kind of thing. We don’t point fingers and say, ‘You’re the guy who’s not giving [β-blockers] when you have an MI patient’.” (Quality Improvement Director, H4)

In another hospital that did not employ physician profiling, the quality management director highlighted the importance of organizational culture in deciding whether to produce physician-specific data feedback:

“I think now people would be receptive to the data by physician. I think 8–10 years ago, they would not [have been] but [now] they know that we’re not punitive about it. We’re just trying to be educational. The culture is such that now doctors realize we’re doing it for improvement purposes ... not to take away someone’s privileges or credentials.” (Quality Management Director, H3)

One hospital dropped physician profiling because, as perceived by the Chief of Cardiology, it had not been effective in changing physicians’ practices and such physician-specific feedback might even retard efforts to improve. He said:

“I think we were a little more punitive 5 years ago [when we did physician profiling] than we are now, quite honestly. And I think it’s been a good move to get away from that—just work on a more group level than an individual level.” (Chief of Cardiology, H5)

Theme 7: Data feedback must persist to sustain improved performance

In addition to identifying data feedback as an important motivator of change, participants strongly believed that continued data monitoring and feedback were necessary even after improvement had taken place, in order to sustain the gains achieved. The experience of increasing β blocker use at first and then relapsing to earlier, lower rates was not uncommon. Most participants indicated that vigilance in monitoring progress with continued data feedback would always be needed. For example, a Cardiac Care Nurse Specialist at a hospital that had had a substantial decrease in β blocker use said:

“When we first looked at [β blocker use], right away afterwards it seemed like [β blocker use] was better, and then it kind of waned again. So it looks like we need to [provide data feedback on β blocker rates] again, is basically what it’s saying. You can’t just do it once and expect it to be fine.” (Cardiac Care Nurse Specialist, H2)

DISCUSSION

Support for data feedback as a central component of performance improvement efforts was widespread among the hospital administrative and clinical staff in this study, yet the design and implementation of data feedback efforts were highly variable. This study suggests that data feedback is a complex and textured concept and helps identify hypotheses about approaches to data feedback that might be more effective in changing physician practices and the potential pitfalls in using data to promote performance improvement.

The themes revealed the importance of perceived validity and meaningfulness of the data used in performance reporting. Data feedback efforts were most effective when physicians perceived that the data were credible, and this data credibility often took time to develop in a hospital. Data credibility was derived from the source of the data, the reliability and training of data abstractors, the timeliness of the data fed back, and the skills of and respect for those presenting the data feedback. Data meaningfulness was enhanced substantially by benchmarking against practices in similar institutions in the region or marketplace as well as against one’s own institution over time. Both characteristics of the data—perceived validity and meaningfulness—were reported to be central to the potency of data feedback efforts in catalyzing changes in physician practices.

In addition, the findings suggest that the effectiveness of data feedback might be intertwined with the organizational context, including the degree of physician leadership and the organizational culture. What might be an effective approach in one hospital might not be effective in another. Rather, the effectiveness is in part a function of the degree to which physician leaders are involved in the development and presentation of data as well as the degree to which the organizational culture promotes non-punitive discussion of physician practices and quality improvement. This was particularly noted in the area of physician profiling, which was viewed by some participants as very effective and by others as not at all effective in improving performance, depending on how the physician-specific data were used and understood.

Lastly, several participants perceived that continuous monitoring and data feedback would be needed to sustain improvements over time. In some hospitals, participants felt that data feedback would need to be not a one time intervention but an embedded part of the process of caring for patients with AMI. Although the analysis is limited by small numbers, a few hospitals had experienced recidivism in β blocker use, which participants perceived to have coincided with a relaxation of monitoring and feedback. In the higher performing hospitals, participants described the data feedback as ongoing despite high performance.

Key messages

  • Data feedback is a central component of quality management.

  • The use of data feedback to improve the rate of β blocker use following acute myocardial infarction was evaluated using open ended interviews with 45 clinical and administrative staff in eight hospitals.

  • Seven key themes emerged from the study:

    – Data must be perceived by physicians as valid to motivate change

    – It takes time to develop the credibility of data within a hospital

    – The source and timeliness of data are critical to perceived validity

    – Benchmarking improves the meaningfulness of the data feedback

    – Physician leaders can enhance the effectiveness of data feedback

    – Data feedback that profiles an individual physician’s practices can be effective but may be perceived as punitive

    – Data feedback must persist to sustain improved performance

  • The effectiveness of data feedback depends not only on the quality and timeliness of the data but also on the organizational context in which such efforts are implemented.

Previous studies have identified data validity8,12,18,20 and timeliness9,19 as important components of data feedback. Our study highlights perceived validity as central and suggests strategies that might enhance that perceived validity over time. In addition, our study reveals the reported importance of making the data meaningful through benchmarking against other organizations and against one’s own organization over time. Lastly, our study argues for a stronger appreciation of the role of the organizational context in designing and implementing data feedback efforts. This finding highlights the complexity of evaluating data feedback interventions, because their impact may be modified by the organizational context. Based on the views of participants in this study, physician leadership and the broader organizational culture of improvement may determine in part which data feedback strategies will be most effective in individual hospitals.

The findings of the study should be interpreted in light of the study design. Firstly, the study included a selected sample of 45 participants in eight hospitals and, while the hospitals varied in geographical region, size, and performance, additional themes might have been apparent in other samples of hospitals. Secondly, qualitative data collection and analysis are by nature subjective; however, we used several techniques to limit the subjectivity and bias that can compromise such studies.36,39,40 All interviews were tape recorded; in addition, standardized procedures were used for coding and analyzing the data, and the coding was performed by a group of individuals with diverse backgrounds. Thirdly, we examined only data feedback directed at β blocker use, and experiences with data feedback directed at other clinical processes may differ. Finally, due to the qualitative nature of the study, our findings do not constitute statistical inferences. Despite these limitations, we believe articulating the key themes and hypotheses from this study can be an important step in developing the evidence base needed to understand the effectiveness of different data feedback efforts used to support performance improvement.

Acknowledgments

The authors acknowledge the research assistance of Kristin Mattocks and Tashonna Webster and the manuscript preparation of Maria Johnson, as well as the participating hospital sites and staff.

Footnotes

  • This research was supported by the Agency for Healthcare Research and Quality, R01 HS10407. Dr Bradley is supported by the Donaghue Medical Research Foundation (#02-102) and the Claude D Pepper Older Americans Independence Center at Yale (#P30AG21342).
