
QI research and evidence based health care
Strengthening the contribution of quality improvement research to evidence based health care
G R Baker

Correspondence to:
G Ross Baker PhD, Department of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada M5T 3M6; ross.baker@utoronto.ca


Better reporting of quality improvement efforts could assist in the design of effectiveness research

The suggestions by Davidoff and Batalden1 for strengthening reports on quality improvement offer useful guidance for those wishing to publish such work. Their rationale for providing this guidance stems from their perception that the failure to provide better information about local improvement efforts slows the spread of successful changes, limits the scrutiny of quality improvement work, and reduces the incentives to participate in such efforts. They are not the first to lament the variable quality of such reports. Yet Davidoff and Batalden focus primarily on how enhanced reporting of quality improvement efforts would affect quality improvement practice. Better reporting will also have an important impact on quality improvement research, and the benefits to research and practice are likely to be synergistic.

GOALS OF QUALITY IMPROVEMENT RESEARCH

More thorough reporting of quality improvement is a first step toward greater academic respectability for these efforts. But a fuller dialogue about these methods and their epistemology is also critical. The goals of quality improvement practice are to enhance performance by setting aims, examining processes of care, testing changes in these processes, and implementing those changes that improve results.2,3 Nolan has characterized quality improvement as “pragmatic science”, referring to its emphasis on using knowledge about how care is delivered to identify improvements and build better systems through the accretion of small changes.4 While the selected changes derive from relevant research as well as the knowledge that clinicians gain in treating their patients, this approach creates discomfort among those who judge it by the standards of evidence based health care.5,6 However, valuing local knowledge does not diminish the importance of research, because the goals of quality improvement differ. Quality improvement efforts use evidence to identify changes and focus on implementing effective practices, not on assessing whether those practices are effective. For example, a typical quality improvement project aimed at reducing postoperative infections does not assess which antibiotic a patient undergoing surgery should receive. Rather, it tests ways of delivering the right antibiotic in a timely fashion, using guidelines based on research assessing the efficacy of different antibiotics. Quality improvement research relies heavily on simple pre-post designs, often in single sites without controls, but such research offers an important starting point for understanding workable approaches to implementing improvements. The goal of quality improvement research is weighted toward identifying how to implement effective changes, not toward assessing the efficacy of those changes.

Given these differences, it is not surprising that quality improvement research and evidence based health care are usually seen as distinct, and sometimes opposing, strategies despite their shared goal of improving outcomes. However, a growing synergy between these two approaches, and between the two camps of researchers and practitioners, is possible, as are opportunities for greater collaboration. Take, for example, the growing number of improvement collaboratives involving teams from one or more organizations and focused on improving care in specific areas. These collaboratives have become an increasingly useful technique for testing the implementation of evidence based care. Typically, these efforts begin with a careful review of research and guidelines and a discussion with a panel of experts in the relevant clinical areas, a review that aims to generate ideas for teams to implement. The American “100,000 Lives Campaign”, which focuses on six “bundles” of interventions to improve patient safety, has taken this idea to a new level. Working with experts, the Institute for Healthcare Improvement identified groups of evidence based practices that have been shown to yield improved clinical outcomes. Clinical teams who sign on to improve care are thus given an arsenal of research based practices to test in their hospitals. Providing these bundles of evidence based practices accelerates the work of teams, who can concentrate on implementing changes to reduce ventilator associated pneumonia or catheter related bloodstream infections, to name only two areas of the campaign. Quality improvement efforts that might otherwise stall in deliberation over what changes to make benefit from this review of the evidence.

POTENTIAL CONTRIBUTIONS OF QUALITY IMPROVEMENT RESEARCH

There are several other areas where quality improvement research could be more closely associated with assessments of effective practice. Firstly, the focus of improvement research on identifying effective means of implementing new practices could help in the design of randomized controlled trials and other research assessing effective practice. Randomized controlled trials are widely accepted as the most reliable method for assessing effectiveness, but they were originally designed to test discrete interventions such as a medication. The success of randomized controlled trials of more complex interventions may be mediated by local contexts: implementation of what appear to be sound practices is often impeded by unforeseen or difficult to control variables such as variations in staffing or organizational receptivity to change. Complex interventions in hospitals such as stroke units or medical emergency teams require careful orchestration between units and across disciplines. Complex interventions in the community to improve care for diabetes, asthma, and other chronic conditions often include multiple components such as guidelines, patient education, communication between care providers, and the development of registries and information systems, and they target multiple caregivers as well as patients. Such research poses logistical challenges that require constant attention. Increasing effort is being devoted to improving the design of trials for such complex interventions,7,8 but negative evaluations may occur where the interventions are poorly defined or badly implemented, even if the underlying approach is sound.

One solution to such problems is greater investment in the design of the interventions before they are assessed. Campbell and colleagues suggest that researchers need to invest more in the development and evaluation of complex interventions.8 Such activities include specifying the intervention’s components, how they relate to each other, and how they influence proximal and final outcomes. They identify a number of techniques useful for this purpose, including modeling and simulation and qualitative research that focuses on identifying barriers to implementation. Another approach would be to use quality improvement methods to develop the interventions and identify barriers to implementation. Refining interventions with quality improvement tools would allow researchers to test well designed bundles of changes, identify barriers to change, and determine how to scale up intervention efforts from a few organizations to many. Such quality improvement efforts would require careful monitoring and documentation of the changes in processes and of the impact of specific interventions on outcomes,9 activities that Davidoff and Batalden suggest including in reports of quality improvement work. The benefits of such analysis might be substantial, both in improving the design of the interventions and in reducing the logistical barriers to implementation.

A second, related contribution from quality improvement work is a more conscious focus on the types of strategies that may yield improvement in specific care environments. Large scale trials of clinical interventions are time consuming and costly, so careful planning of the interventions needed to improve outcomes is essential. Here again, some clinical trials experts have argued for greater attention to the choice of appropriate interventions. Efforts to improve care for heart failure patients through an educational intervention, for example, are unlikely to succeed if the key barrier is inadequate communication between community physicians and hospital staff. Quality improvement work may offer important data and insights on the effectiveness of specific interventions for specific quality problems. The use of Plan-Do-Study-Act cycles and the development of theories and predictions about the impact of changes on outcomes deepen learning about specific improvement strategies.10 Reports on quality improvement efforts that provide details of such theories and predictions may therefore offer substantial benefits.

A final contribution from linking quality improvement more closely with evidence based health care comes from the critical insight of improvement scholars into the importance of local adaptation. A key premise in improvement is that changes need to be tailored to fit local contexts. Sometimes these changes require only minor tweaking, but in other cases the improvements themselves vary between sites. The work of the Northern New England Cardiovascular Study Group demonstrates the importance of this insight: the surgeons, nurses, and other staff in the sites engaged in this improvement research were committed to a common goal of improving key outcomes and used similar methods, but the changes introduced varied from site to site.11 Traditional randomized controlled studies have demanded detailed intervention protocols that are applied uniformly across intervention sites. By contrast, quality improvement has encouraged local experimentation with innovations to ensure that these are adapted to local needs. There is growing interest among some clinical trials experts in recognizing the importance of local conditions and the need for greater flexibility to take into account the social, financial, and organizational barriers in local healthcare settings.12,13 Greater dialogue about how such pragmatic trials could evolve to systematically adapt interventions to address those local barriers would be a useful next step. Again, quality improvement provides the tools and methods for such adaptation, and collaboration between those advocating more pragmatic clinical trials and those working with quality improvement methods would seem to hold promise.

Adherence by quality improvement researchers and practitioners to the guidelines proposed by Davidoff and Batalden will help to improve the quality of improvement research as well as of quality improvement practice. By doing so, we may enhance collaborations with researchers who have viewed quality improvement work as anecdotal reporting based on limited samples and poor designs. Better reporting will also increase the usefulness of quality improvement research for those designing research to assess effectiveness.

Footnotes

  • Funding: none.

  • Competing interests: none declared.
