Can evidence-based medicine and clinical quality improvement learn from each other?
  1. Paul Glasziou1,
  2. Greg Ogrinc2,
  3. Steve Goodman3
  1. Centre for Research into Evidence-Based Practice, Faculty of Health Sciences and Medicine, Bond University, Queensland, Australia
  2. Community and Family Medicine and Medicine, White River Junction VA Medical Center, Dartmouth Medical School, Hanover, New Hampshire, USA
  3. Oncology, Pediatrics, Epidemiology and Biostatistics, Johns Hopkins Schools of Medicine and Public Health, Baltimore, Maryland, USA
  Correspondence to Paul Glasziou, Centre for Research into Evidence-Based Practice, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Queensland 4229, Australia; pglaszio@bond.edu.au

Abstract

The considerable gap between what we know from research and what is done in clinical practice is well known. Proposed responses include Evidence-Based Medicine (EBM) and Clinical Quality Improvement (QI). EBM has focused more on ‘doing the right things’—based on external research evidence—whereas QI has focused more on ‘doing things right’—based on local processes. However, these approaches are complementary, and in combination they direct us how to ‘do the right things right’. This article examines the differences and similarities between the two approaches and proposes that both would gain by integrating the bedside application, methodological development and training of these complementary disciplines.

  • Quality improvement
  • Evidence-based medicine

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.


Introduction

Those working in healthcare are aware of the considerable gap between what we know from research and what is done in clinical practice.1 For example, enforced bed rest is ineffective in several conditions where it is still used; exercise reduces mortality in heart failure but is rarely used; and brief counselling after trauma is fashionable but ineffective. The response to this well-documented problem includes both Evidence-Based Medicine (EBM)2 3 and Clinical Quality Improvement.4 5

The term EBM was coined by Gordon Guyatt in 1992 for the JAMA ‘Users' Guides’ series to describe the bedside use of research to improve patient care. At the time it was common at McMaster teaching hospitals for patients' notes to include a key research paper relevant to their care, and for this to be discussed on ward rounds (personal observation—PG). Improved information technology allowed Dave Sackett's team to use an ‘evidence cart’ on ward rounds at the John Radcliffe Hospital in Oxford; the team asked and answered two questions for every three patients, and the searches changed one-third of clinical decisions.6 Different specialties and different individuals have adopted the principles of EBM to different degrees and in vastly different ways (interviews at www.cebm.net/index.aspx?o=4648 illustrate this diversity).

In parallel, the Quality Improvement (QI) movement emerged to address similar problems, but with a focus on recurrent problems within systems of care. The first methods, used in the National Demonstration Project in the 1980s, were adapted from those introduced into industry by Deming.4 But the methods soon evolved to suit the different environment of healthcare, with the founding of the Institute for Healthcare Improvement and the development of the Breakthrough Series collaborative, the Model for Improvement, and other modifications and extensions of QI methods.

EBM and QI have broadly similar goals but focus on different parts of the problem. EBM has focused more on ‘doing the right things’: actions informed by the best available evidence from our clinical knowledge base (figure 1). QI has focused more on ‘doing things right’: making sure that intended actions are done thoroughly, efficiently and reliably. These approaches are complementary (figure 1) and in combination direct us how to ‘do the right things right’.7

Figure 1

Relationships between Quality Improvement (QI) and Evidence-Based Medicine (EBM). (a) sequence of EBM followed by QI; (b) EBM uses clinical knowledge to inform individual clinical decisions about patient care; (c) QI focuses on improving recurrent problems in the processes of care (Acronyms: GIN—Guidelines International Network; EPOC—Effective Practice and Organisation of Care Group; IHI—Institute for Healthcare Improvement; BEME—Best Evidence Medical Education).

Before ‘fixing’ an evidence–practice gap, we would be wise to ask ‘is this the right thing to do?’ Is there really a problem? Is there something effective we can do about it? For example, many QI initiatives in diabetes aimed to achieve low HbA1c levels,8 a target for which the evidence was weak and which subsequent large-scale randomised trials (ACCORD and ADVANCE) suggest may be unhelpful or even harmful.9

The ‘right things right’ process might be illustrated by a familiar clinical example. In the clinic of one of the authors, the primary care team was considering whether to try to wean elderly patients off long-term benzodiazepines given for insomnia. We and the patients seemed reluctant. However, the team reviewed the evidence10 and found that the risk of falls on benzodiazepines was higher than we had expected, and that other adverse effects, such as cognitive loss, added to the problems. So cessation was the ‘right thing’, but how best to achieve it? A review of the controlled trials showed that weaning was possible, and that simple (but very slow) tapering was as effective as more intensive methods such as cognitive therapy counselling. Finally, sending patients a structured letter (on why and how to withdraw) had also been shown to be effective.11 Without this evidence review by the clinical team, we might have wasted a lot of effort on ineffective means to achieve our goal. But without a clinical improvement process to change our practice, this knowledge might not have provoked action. Of course, QI would also suggest additional questions: how many patients are on which benzodiazepines, and for how long? How many falls or other adverse events have occurred on the benzodiazepines? What will happen to our patients' anxiety once we withdraw the medication?

This article aims to look at what each of these disciplinary areas might learn from the other. The difference in approach of the two disciplines may be better understood by looking at the problem each perceives it is addressing.

The EBM perspective

One cause of the evidence–practice gap is information overload: approximately 8000 references, including around 350 randomised trials, are added to MEDLINE each week. Only a small fraction of this research is sufficiently valid and relevant to change practice, so keeping up to date with new developments is problematic. One arm of EBM has therefore been to synthesise and summarise this flood of research so that evidence can be accessed wherever and whenever it is needed. Achieving this requires both ready access (to resources such as MEDLINE and the Cochrane Library) and skills (in finding, appraising and applying evidence) that few healthcare workers currently have. The EBM movement has focused on developing both the skills and the tools to better connect research and clinical practice, with some12 but not universal success.

A particular focus of EBM has been to take a more sceptical approach to innovation, asking for clear evidence before changing practice. Given that few innovations represent a real advance, this cautious approach means less disruption arising from unnecessary changes in practice.13

The QI perspective

The problem addressed by the QI approach might be characterised as the ‘knowing–doing’ gap: we know (individually and collectively) what to do, but fail to do it or fail to do it correctly. The gap has many causes, from a simple lack of knowledge (in some individuals) about what should be done, to uncertainties about the ‘how to’. For example, we may know that certain groups of patients should receive influenza vaccine but fail to give it because the system does not encourage reliable administration of the vaccine.

For example, at one institution the electronic medical record (EMR) had been fully implemented for many years, so the physicians-in-training, staff physicians and nurses trusted the EMR. On discharge from the hospital, a prompt in the EMR enquired whether a patient needed to receive the influenza vaccine. No one realised that this prompt led to a blind alley: no vaccine was ordered and no vaccine was given. The individuals (and the EMR) knew that influenza vaccine was the ‘right thing to do’, but the EMR and the culture of trusting the EMR inhibited ‘doing the right thing’. Fixing this problem proved to be a challenge, requiring several Plan–Do–Study–Act (PDSA) iterations. The seemingly simple change to the EMR's influenza vaccine ordering was low on the priority list for the IT support group, so the care team had to create a work-around and learn how to reliably ‘do the right thing’ in the context and setting of care.14 Even an intervention as simple as administering influenza vaccine in a fully integrated EMR environment required a thorough knowledge of the system, and then creativity in dealing with its limitations—much more than just knowledge of the ‘right thing’.

The techniques of QI have focused on moving from what we know to how we should do it. Common techniques include reminder systems (paper or computer), simplification of processes (reducing unnecessary steps), structuring the system to do the right thing at the right time and place (putting hand-washing equipment and/or instructions in the right place) and testing new interventions to determine what works in a particular setting. This is often a creative and iterative process of identifying barriers and working out solutions to overcome them.
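To make the ‘closed loop’ point concrete, here is a minimal sketch of a discharge reminder that ends in an order rather than the blind alley described above. The patient fields, eligibility rule and place_order interface are hypothetical illustrations only, not any real EMR's API or official vaccination criteria.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    vaccinated_this_season: bool
    high_risk: bool

def needs_flu_vaccine(p: Patient) -> bool:
    # Illustrative eligibility rule only; real criteria come from guidelines.
    return not p.vaccinated_this_season and (p.age >= 65 or p.high_risk)

def on_discharge(p: Patient, place_order) -> None:
    # The fix for the 'blind alley': an eligible patient must generate an
    # actual vaccine order, not merely a recorded answer to a prompt.
    if needs_flu_vaccine(p):
        place_order("influenza vaccine")

# Usage: place_order stands in for the EMR's ordering interface.
on_discharge(Patient(age=72, vaccinated_this_season=False, high_risk=False),
             place_order=lambda item: print(f"Order placed: {item}"))
```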

Marrying EBM and QI

As illustrated by the above examples, EBM is involved in the early stages of checking the validity and applicability of the available evidence to the clinical problem. This involves the traditional ‘four steps’ of EBM illustrated in figure 2. QI processes15 may be triggered at the fourth step if it seems likely that the clinical problem is a common one for which the current system of practice is not optimal.16 Similarly, in the planning stage of a QI project there may be several questions that trigger an EBM cycle to check for evidence.

Figure 2

Proposed linkage between EBM and one model for QI.

In addition to this merging of EBM and QI processes, there are deeper organisational and epistemological issues in common which we briefly discuss in the next section.

The evidence for EBM and QI

A common criticism levelled at both EBM and QI is that there is only weak evidence that either process makes a difference to patient outcomes. However, neither EBM nor QI is a single ‘thing’ that can be evaluated in the way we evaluate a fixed dose of a single chemical entity. Rather, they are disciplines with multiple techniques that may be used to address the research–practice gap. For example, we know from large randomised trials that aspirin lowers mortality after myocardial infarction but is underused; we would therefore want to improve appropriate usage. EBMers might focus on the ‘appropriate’ part (subgroups, balance of benefits and harms, etc); QIers might focus on the usage part (barriers, prescribing systems, etc). But success here could largely be judged by process measures—an increase in aspirin use—rather than by in-hospital mortality, because the link to mortality has already been proven in the trials. Hence, rather than asking ‘what is the evidence that EBM (or QI) is beneficial?’, we should ask which techniques within each discipline best achieve better practice, in what circumstances those techniques work, and how we can disseminate them.

The problems of assessing interventions by process or outcome measurement are illustrated by a recent systematic review17 of QI ‘collaboratives’ that focused on increasing the use of surfactant in premature infants. A collaborative (sometimes called a ‘breakthrough collaborative’) is a consortium of 20–40 healthcare organisations using QI methods on a specific healthcare quality problem. The review found nine eligible studies, including only two randomised trials. The reviewers concluded that the evidence for collaboratives was ‘limited’ because the first trial showed no effect, and the second showed ‘significant improvement in two specific processes of care but no significant improvement in patient outcomes (mortality and pneumothorax)’. However, this conclusion may ask too much of a trial's ability to detect real changes in outcomes. The improvement in processes of care included a substantial increase in surfactant use from 18% to 54% (a 36 percentage-point increase). But the pooled trials of surfactant, which included 1500 randomised infants,18 were barely large enough to demonstrate the mortality reduction, so expecting the collaborative trial to detect a mortality reduction is unrealistic: with the 36 percentage-point improvement in surfactant use seen, reliably detecting the predicted mortality reduction would require a trial of collaboratives about nine times larger (9×1500), that is, at least 13 500 individually randomised infants. This may be infeasible and unnecessary. If both steps (surfactant effectiveness and the QI process improvement; see figure 1) separately have strong evidence, then together they represent a ‘complete causal chain’ whose evidence is equal to that of the weakest link.19 The more important questions here, then, are ‘What elements of the neonatal collaboration were important?’ and ‘How well will they transfer to other settings?’
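The arithmetic behind this dilution can be sketched in a few lines (a back-of-envelope illustration, not the reviewers' own calculation): when a QI intervention changes treatment uptake for only a fraction of patients, the detectable outcome difference between trial arms shrinks by that fraction, and the required sample size grows by its inverse square.

```python
# Back-of-envelope sample-size dilution for the surfactant example above.
# If uptake rises from 18% to 54%, only 36% of infants' treatment actually
# differs between arms, so the observable mortality difference is diluted to
# 0.36 of the direct-trial effect; required sample size scales as 1/0.36**2.

def inflation_factor(uptake_control: float, uptake_qi: float) -> float:
    dilution = uptake_qi - uptake_control  # 0.54 - 0.18 = 0.36
    return 1.0 / dilution ** 2

n_direct = 1500  # infants in the pooled surfactant trials (reference 18)
factor = inflation_factor(0.18, 0.54)
print(f"inflation: {factor:.1f}x, infants needed: {factor * n_direct:,.0f}")
# Prints roughly 7.7x and about 11,600 infants; the article's 'about nine
# times' (13 500) is a conservative rounding of the same order of magnitude.
```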

By shifting the focus to specific methods, we may ask more focused and answerable questions. For example, how effective are reminder systems in reducing ‘slips’? A systematic review of trials of reminders is a valuable resource for QI practitioners wanting to know when and how they do or do not work.20 Similarly, having a ‘clinical informaticist’ to support a team's evidence-based practice is intuitively appealing and ‘do-able’,21 but evaluative trials showing a positive impact of clinical informaticists on clinical decision making are relatively recent.22 Some interventions, such as a QI collaborative, may be a complex mix of techniques and therefore intrinsically more difficult to evaluate. Evaluation is still worthwhile, but may shift its focus to understanding what methods a particular collaborative used, what seemed to work or not, and in what circumstances.

Finally, the epistemology of both disciplines is evolving, and a better understanding of the science and scientific rules of both areas will be important for their continued growth and impact. For example, an early but simplistic interpretation of EBM was that all interventions required randomised trial evidence. While randomised trials remain important, we now recognise that different types of questions need different types of evidence,23 and that even for treatment questions, evidence from non-randomised studies, including some case series, can occasionally be sufficiently compelling.24

A way forward

Early QI methods in healthcare incorporated a link to evidence, but this connection seems to have faded over the years. In the early 1990s, the Hospital Corporation of America (HCA) developed and used FOCUS-PDCA, which explicitly included a detailed analysis of the evidence for proposed changes, the processes of care and the data about local performance.25 This methodology developed into the PDSA cycle,15 a common, simple and effective technique, but one in which the connection to evidence is less clear. We propose that re-establishing a clear connection between EBM and QI will benefit both disciplines and, ultimately, patients and families. For those engaged in either QI or EBM (or, hopefully, both!) the complementary focus and methods of the two disciplines carry several implications, both epistemological and practical.

Those working in QI teams should, before taking on a change, routinely check the validity, applicability and value of the proposed change, and should not simply accept external recommendations. (Corollary: at least some members of a QI team must have high-level skills in EBM.)

Those working in EBM should recognise that it is not sufficient simply to appraise the evidence; at the end we should ask ‘what is the next action?’16 and sometimes enter a PDSA cycle. (Corollary: at least some members of an EBM team will need high-level skills in QI.)

Those working on the methods of QI and EBM should stop being so concerned about whether the abstract concepts of EBM or QI ‘work’, and should instead focus on developing and evaluating specific methods of each, shedding light on which elements are most effective in which circumstances. This evaluation should involve two related processes. First, recognise that ‘experiential learning’ is a cyclic process of doing, noticing, questioning, reflecting, exploring concepts and models (evidence), then doing again—only doing it better the next time (PDSA cycles).26 Second, when potentially generalisable new techniques are developed, they should be subjected to more formal evaluation. Several stages of evaluation specific to surgery have recently been proposed,27 recognising the development and learning needed before a full evaluation. Related problems have been recognised in applying the Medical Research Council (MRC) complex interventions framework to health improvement.28 However, some creative tension between doing, developing and evaluating will always exist.

Finally, those teaching the next generation of clinicians should value both disciplines, which should be taught, integrated and modelled in clinical training.29 Medical curricula, undergraduate and postgraduate, and healthcare organisations should incorporate both EBM and QI training, taught as an integral whole. Such training requires learning background skills and theory, but also ‘bedside’ teaching and modelling of how EBM and QI are applied in real clinical settings. By integrating the bedside application, methodological development, training and organisational support of these complementary disciplines, we can, hopefully, ever more frequently do the ‘right things right’.

Acknowledgments

We would like to thank Paul Batalden for initiating this topic, and Frank Davidoff for helpful comments.


Footnotes

  • Funding This material is based on support from the VA National Quality Scholars Program and the use of facilities at the White River Junction VA; Dr Glasziou is supported by an NHMRC Fellowship.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.