Editorials

Effects of quality improvement collaboratives

BMJ 2008; 336 doi: https://doi.org/10.1136/bmj.a216 (Published 26 June 2008) Cite this as: BMJ 2008;336:1448
Peter K Lindenauer, director and associate professor of medicine

Center for Quality and Safety Research, Baystate Medical Center and Tufts University School of Medicine, Springfield MA, 01199, USA

peter.lindenauer{at}bhs.org

Are difficult to measure using traditional biomedical research methods

In the linked study, Schouten and colleagues report a systematic review of the effectiveness of quality improvement collaboratives in improving the quality of care. They conclude that the evidence supporting these collaboratives is positive but limited and their effects are difficult to predict.1

Despite limited evidence, the quality improvement collaborative is one of the most popular methods for organising improvement efforts at hospitals and ambulatory practices worldwide. Quality improvement collaboratives in health care date back to the mid-1980s, and some of the earliest and most successful examples include the Northern New England Cardiovascular Disease Study Group, the US Veterans’ Affairs National Surgical Quality Improvement Program, and the Vermont Oxford Network. These ongoing initiatives have improved care and saved many lives at participating hospitals.2 3 4

In the 1990s, the Institute for Healthcare Improvement, the pre-eminent quality improvement organisation in the United States, popularised a quality improvement model they called the breakthrough series.5 Whereas earlier quality improvement collaboratives were limited to a single domain (such as cardiac surgery), the breakthrough series method has been applied to a wide range of topics, from improving access in primary care to reducing adverse drug events among patients in hospital.

Quality improvement collaboratives bring together quality improvement teams from multiple sites across a region or country to focus on a common problem. Over one or two years (or many years in the earliest collaboratives), experts in clinical and performance improvement provide the group with periodic instruction and encourage the teams to share lessons learnt and best practices. The model has taken hold largely on its face validity—the idea that improvement teams are likely to be more effective when working together rather than in isolation—and it has been replicated many times across the US and Europe.

Several years ago our hospital joined a quality improvement collaborative to reduce the occurrence of postoperative infections in patients undergoing major surgery. Together with more than 50 hospitals throughout the US and its territories, we identified several specific quality measures and targets; for example, we sought to ensure that all patients received prophylactic antibiotics within one hour of surgical incision.

At each of several “learning sessions” we received instruction from national leaders in perioperative care and training from quality improvement experts in how to apply the “plan-do-study-act” quality improvement paradigm to surgical care. After the initial meeting, each hospital presented its progress, achievements, and lessons learnt, and we discussed how to apply these lessons at home.

At the end of the 18 month project we had made dramatic improvements in several key process of care measures, but little headway in others, and our postoperative infection rate had not improved. Some hospitals across the collaborative struggled to make even small improvements, whereas others described impressive gains and substantial reductions in infection.

Unfortunately, neither the quality improvement collaborative for surgical infection prevention nor hundreds of others that have been carried out over the past two decades are included in the systematic review by Schouten and colleagues. This cannot be blamed on the authors, who scanned more than 1000 journal abstracts to find 175 articles worth reviewing in detail. Of the 72 published studies that reported on the outcomes or effectiveness of a quality improvement collaborative, 60 (83%) used an uncontrolled study design, generally relying on a simple before and after approach that could not account for secular trends; relied on self report rather than third party chart review; and suffered from generally poor quality data management procedures. The remaining 12 reports represented nine studies, including two randomised controlled trials; seven showed at least some positive effects on process or outcome measures, while two were entirely negative. Even in this highly restricted group, most studies had methodological weaknesses that would be considered problematic outside of the field of quality improvement research. Of the two randomised controlled trials, one showed no benefit, whereas the other showed improvement in two process of care measures but not in outcomes.

Although the review is original, it does have several important limitations. Firstly, it is debatable whether the nine studies included represent the global experience with quality improvement collaboratives, and thus whether the findings can be extrapolated to future collaboratives. Secondly, the small number of high quality studies makes it impossible to evaluate which characteristics of these collaboratives are associated with success: for example, which kinds of clinical conditions are most suited to the approach, what the attributes of a successful faculty are, the ideal mix of team members, how many sessions are needed and how they should be structured, and the time period over which the quality improvement collaborative should take place.6

The third concern is whether aggregating the findings of a heterogeneous group of studies on quality improvement collaboratives makes much sense. To state that quality improvement collaboratives are modestly beneficial seems analogous to saying that, in general terms, drugs have beneficial effects on disease. Although this may be true, it hides the fact that some drugs improve outcomes for patients with certain conditions (for example, aspirin for secondary prevention of coronary artery disease) more than they do for others (for example, cholinesterase inhibitors for Alzheimer’s dementia).

A more fundamental question is whether the methods used in traditional biomedical research are sufficient to evaluate quality improvement collaboratives. Undoubtedly, randomised controlled trials are the optimal approach to testing the efficacy of drugs. But whereas in most pharmacological trials a study coordinator ensures that patients are treated according to a strict protocol, quality improvement initiatives take place in a far less controlled environment. Research into quality improvement that reports only the mean improvement in participants and controls misses an opportunity to explore the contextual factors that might explain why two hospitals can have such different experiences when participating in the same quality improvement collaborative.

Future research should focus on the behaviours and actions of the participants themselves, such as how the executive sponsors tried to ensure that the team was successful, what role the doctor and nurse champions played in winning the support of their colleagues, and how information technology was used for the benefit of the project.7 8 While lip service has been paid to the need for these kinds of studies, they remain few and far between.9 10 11
