
Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback
Gro Jamtvedt,1 Jane M Young,2 Doris T Kristoffersen,1 Mary Ann O’Brien,3 Andrew D Oxman1

1 Norwegian Knowledge Centre for the Health Services, Oslo, Norway
2 Surgical Outcomes Research Centre, Central Sydney Area Health Service, Sydney, New South Wales, Australia
3 McMaster University and Supportive Cancer Care Research Unit, Juravinski Cancer Centre, Hamilton, Ontario, Canada

Correspondence to: G Jamtvedt, Norwegian Knowledge Centre for the Health Services, PO Box 7004, St Olavs plass, Oslo N-0031, Norway; gro.jamtvedt{at}nokc.no

Abstract

Background: Many people advocate audit and feedback as a strategy for improving professional practice. The main results of an update of a Cochrane review on the effects of audit and feedback are reported.

Data sources: The Cochrane Effective Practice and Organisation of Care Group’s register up to January 2004 was searched. Randomised trials of audit and feedback that reported objectively measured professional practice in a healthcare setting or healthcare outcomes were included.

Review methods: Data were independently extracted and the quality of studies was assessed by two reviewers. Quantitative, visual and qualitative analyses were undertaken.

Main results: 118 trials are included in the review. The primary analysis included 88 comparisons from 72 studies that compared any intervention in which audit and feedback was a component with no intervention. For dichotomous outcomes, the median adjusted risk difference in compliance with desired practice was 5% (interquartile range 3–11). For continuous outcomes, the median adjusted percentage change relative to control was 16% (interquartile range 5–37). Low baseline compliance with recommended practice and higher intensity of audit and feedback appeared to predict larger effects of audit and feedback.

Conclusions: Audit and feedback can be effective in improving professional practice. The effects are generally small to moderate. The absolute effects of audit and feedback are likely to be larger when baseline adherence to recommended practice is low and intensity of audit and feedback is high.


Audit and feedback is widely used as a strategy to improve professional practice. It appears logical that healthcare professionals would be prompted to modify their practice if given feedback that their clinical practice was inconsistent with that of their peers or accepted guidelines. Yet, feedback has not been found to be consistently effective.1–8 We updated a previous Cochrane review to deal with the following questions7:

  • Is audit and feedback effective in improving professional practice and healthcare outcomes?

  • How does the effectiveness of audit and feedback compare with that of other interventions, and can it be made more effective by modifying how it is done?

METHODS

We identified relevant articles in the Cochrane Effective Practice and Organisation of Care register and pending file in January 2004. We also examined the reference lists of retrieved articles.

We included randomised controlled trials involving healthcare professionals. Audit and feedback was defined as “any summary of clinical performance of healthcare over a specified time period”. We included only those studies that objectively measured provider performance in a healthcare setting or healthcare outcomes.

Two reviewers (GJ and JMY) independently selected studies for inclusion, extracted data and assessed study quality.7 An overall quality rating (high, moderate or low protection against bias) was assigned on the basis of six criteria: concealment of allocation; blinded or objective assessment of primary outcome(s); completeness of follow-up (mainly related to follow-up of professionals); and no important concerns in relation to baseline measures, reliable primary outcomes or protection against contamination. We assigned a rating of high protection against bias if the first three criteria were scored as done and there were no important concerns related to the last three criteria, moderate if one or two criteria were scored as not clear or not done, and low if more than two criteria were scored as not clear or not done.
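Read literally, this rating rule reduces to a count of unmet criteria. A minimal sketch, assuming a simple three-level coding for each criterion (the criterion names and data structure below are ours, for illustration only, not the reviewers' actual tool):

```python
# Hypothetical sketch of the quality-rating rule described above.
CRITERIA = [
    "allocation_concealment",            # first three: core criteria
    "blinded_or_objective_outcome",
    "complete_follow_up",
    "baseline_measures",                 # last three: further concerns
    "reliable_primary_outcomes",
    "protection_against_contamination",
]

def quality_rating(scores):
    """scores maps each criterion to 'done', 'not clear' or 'not done'."""
    unmet = [c for c in CRITERIA if scores[c] != "done"]
    if not unmet:
        return "high"        # all six criteria satisfied
    return "moderate" if len(unmet) <= 2 else "low"
```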

We considered audit and feedback combined with interactive, small-group educational meetings separately from audit and feedback combined with written educational materials or didactic meetings, as the latter have been found to have little or no effect on professional practice.1,9,10 We defined “multifaceted” interventions as those including two or more interventions.

We categorised the intensity of feedback as “high”, “moderate” or “low” on the basis of combinations of the following components: recipient, format, source, frequency, duration and content of the feedback.

The complexity of the targeted behaviour and the seriousness of the outcome were categorised subjectively and independently, by GJ and JMY or by GJ and ADO, as “high”, “moderate” or “low”. Baseline compliance with the targeted behaviour for dichotomous outcomes was calculated as the mean of the pre-intervention levels of compliance in the audit and feedback group and the control group.

Analysis

Only studies of moderate or high quality that reported baseline data were included in the primary analyses. Three analyses were conducted across all types of interventions (audit and feedback alone, audit and feedback with educational meetings, or audit and feedback as part of a multifaceted intervention, each compared with no intervention): one using the adjusted risk ratio as the measure of effect, one using the adjusted risk difference, and a third using the adjusted percentage change relative to the control mean after the intervention. All outcomes were expressed as compliance with the desired practice. Professional and patient outcomes were analysed separately.
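This summary does not spell out the formulas; the sketch below shows one standard way of computing such baseline-adjusted measures for before–after data in two groups, consistent with the description here. It is our assumption about the computation, not a verbatim reproduction of the review's methods:

```python
# One plausible formalisation of the baseline-adjusted effect
# measures (an assumption, not the review's exact computation).

def adjusted_measures(pre_i, post_i, pre_c, post_c):
    """Proportions complying with desired practice, pre- and
    post-intervention, in the intervention (i) and control (c) groups."""
    adj_rd = (post_i - post_c) - (pre_i - pre_c)   # adjusted risk difference
    adj_rr = (post_i / post_c) / (pre_i / pre_c)   # adjusted risk ratio
    baseline = (pre_i + pre_c) / 2                 # baseline compliance (see Methods)
    return adj_rd, adj_rr, baseline

def adjusted_pct_change(pre_i, post_i, pre_c, post_c):
    """Adjusted percentage change relative to the control mean after
    the intervention, for continuous outcomes."""
    return ((post_i - post_c) - (pre_i - pre_c)) / post_c
```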

We considered the following potential sources of heterogeneity to explain variation in the results:

  • the type of intervention

  • the intensity of the feedback

  • the complexity of the targeted behaviour

  • the seriousness of the outcome

  • baseline compliance with desired practice

  • study quality

We explored heterogeneity visually by preparing tables, bubble plots and box plots showing the size of the observed effects in relation to each of these variables. We also plotted regression lines to aid the visual analysis of the bubble plots.
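As an illustration of this kind of visual analysis, a minimal bubble plot of adjusted RR against baseline compliance, with bubble size reflecting study size and a weighted regression line overlaid, might look like the sketch below (the input file and column names are hypothetical; this is not the code used in the review):

```python
# Illustrative bubble-plot sketch; column names are assumed.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.read_csv("comparisons.csv")  # hypothetical: one row per comparison

fig, ax = plt.subplots()
ax.scatter(df["baseline_compliance"], df["adj_rr"],
           s=df["n_professionals"] / 10,   # bubble area reflects study size
           alpha=0.5)

# Weighted least-squares line to aid visual interpretation
slope, intercept = np.polyfit(df["baseline_compliance"], df["adj_rr"],
                              1, w=df["n_professionals"])
x = np.linspace(df["baseline_compliance"].min(),
                df["baseline_compliance"].max(), 100)
ax.plot(x, intercept + slope * x)

ax.set_xlabel("Baseline compliance (%)")
ax.set_ylabel("Adjusted relative risk")
plt.show()
```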

The visual analyses were supplemented with meta-regression, weighted according to the number of healthcare professionals, to examine how the size of the effect was related to the six potential explanatory variables. The main analysis comprised a multiple linear regression using only main effects; baseline compliance was treated as a continuous explanatory variable and the others as categorical. These analyses were conducted using generalised linear modelling in SAS.
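The review's analyses were run in SAS; the sketch below shows a roughly equivalent weighted main-effects regression in Python, with baseline compliance continuous and the other five variables categorical (the input file and column names are again hypothetical, for illustration only):

```python
# Rough Python analogue of the weighted main-effects meta-regression.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("comparisons.csv")  # hypothetical input, one row per comparison

# Weights are the number of healthcare professionals per comparison.
model = smf.wls(
    "adj_rr ~ baseline_compliance + C(intervention_type)"
    " + C(intensity) + C(complexity) + C(seriousness) + C(quality)",
    data=df,
    weights=df["n_professionals"],
).fit()
print(model.summary())
```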

As there were important baseline differences in compliance between the intervention and control groups in many studies, our primary analyses were based on adjusted estimates of effect, where we adjusted for baseline differences in compliance.

RESULTS

There are 118 trials in the review, including 30 new studies in this update. In all, 44 studies were classified as high quality and 14 as low quality, with the remaining 60 scored as moderate quality.

A total of 88 comparisons from 72 studies, involving more than 13 500 health professionals, compared audit and feedback alone, or audit and feedback as a component of an intervention, with no intervention. These comprised 64 comparisons of dichotomous outcomes from 49 trials and 24 comparisons of continuous outcomes from 23 trials. The adjusted relative risk (RR) of compliance with desired practice varied from 0.71 to 18.3 (median 1.08, interquartile range 0.99–1.30), and the adjusted risk difference in compliance with desired practice varied from −0.16 (a 16% absolute decrease in compliance) to 0.70 (a 70% absolute increase in compliance; median 0.05, interquartile range 0.03–0.11). For continuous outcomes, the adjusted percentage change relative to control varied from −0.10 to 0.68 (median 0.16, interquartile range 0.05–0.37).

Baseline compliance and intensity of audit and feedback were identified as significant in the multiple linear regression of the adjusted RR (main effects model). The estimated coefficient for baseline compliance was −0.005 (p = 0.05), indicating smaller relative effects as baseline compliance increased (fig 1). The intensity of audit and feedback may also explain some of the variation in the relative effect (p = 0.01; fig 2). For analyses of the adjusted risk difference (RD) and of continuous outcomes, none of the variables that we examined helped to explain the variation in effects across studies.

Figure 1

 Plot of adjusted relative risk (RR) versus baseline compliance, excluding one study. A & F, audit and feedback; Edu, education.

Figure 2

 Box plot. Adjusted relative risk (RR) versus intensity of audit and feedback (A & F), excluding one study. Edu, education.

In the exploratory analysis of the adjusted RD, we pooled studies of audit and feedback with or without educational meetings into one category. In the analysis of the interaction between the intensity of audit and feedback and the type of intervention, the type of intervention helped to explain the observed variation in the absolute effect (p = 0.001; fig 3). The estimated mean adjusted RD, not adjusted for other terms in the model, was 2.1% for studies of audit and feedback with or without educational meetings, whereas it was 9.2% for multifaceted interventions. Intensity of audit and feedback may also help to explain the variation in the absolute effect (adjusted RD) in this analysis (p = 0.04).

Figure 3

 Box plot. Adjusted risk difference (RD) versus intervention type, excluding one study. A & F, audit and feedback; Edu, education.

Audit and feedback combined with other interventions compared with audit and feedback alone

A total of 35 comparisons from 21 trials compared various combinations of interventions including audit and feedback with audit and feedback alone. Adding reminders,11–14 incentives,15,16 outreach17–19 or opinion leaders20–22 to audit and feedback showed mixed results, but no consistent increase in effect was found by adding any of these interventions to audit and feedback. Similarly, the addition of self-study, a practice-based seminar, patient education materials, assistance to develop an office system or a recall system or quality improvement tools did not increase the effectiveness of audit and feedback alone.23–28

Audit and feedback compared with other interventions

Eight comparisons from seven trials compared audit and feedback with other interventions. Reminders improved practice more than feedback in two studies,13,14 but patient education was not found to be better than feedback in a trial to improve prescribing of antibiotics.25 A practice-based seminar was not more effective than feedback in improving compliance with guidelines for magnetic resonance imaging of the lumbar spine and knee,24 and feedback and self-study had the same effect on the percentage of patients with controlled blood pressure in another study.23 In one study that compared feedback with incentives, the doctors in the incentive group reduced the mean number of tests ordered by 50%, whereas those in the feedback group did not change as much.29 A local opinion leader group reduced caesarean section rates more than an audit and feedback group in another study.30

Different types of audit and feedback

Seven studies compared different ways of providing feedback. Three studies compared feedback with and without peer comparison and found no difference between groups.31–33 Feedback on medication, compared with feedback on performance, resulted in no difference in control of blood pressure.34

In one study, mutual visits and feedback by peers were compared with visits and feedback by a non-physician observer to improve performance on 208 indicators of practice management.35 Both programmes showed improvements in some aspects of care after a year, but the improvement was more noticeable after mutual practice visits than after a visit by a non-physician observer.35 Ward et al19 compared audit and feedback complemented by outreach from either a doctor or a nurse. The groups did not differ significantly after the intervention in the process-of-care score for diabetes (adjusted post difference = 0.5).

No difference in prophylaxis for venous thromboembolism was found in a study comparing group feedback with group and individual feedback.36

DISCUSSION

Audit and feedback can be a useful intervention, but its effects varied widely in the trials included in this review, from an apparent negative effect to a very large positive effect.

For dichotomous outcomes, baseline compliance helped to explain the variation in relative effectiveness across studies. However, relative effectiveness did not increase dramatically with decreasing baseline compliance (a change of 0.05 in the adjusted RR for a 10% decrease in baseline compliance). There was also more variation in the adjusted RRs when baseline compliance was lower (fig 1). The intensity of audit and feedback also appeared to explain variation in the adjusted RR for audit and feedback with or without educational meetings. In multifaceted interventions, the contribution of audit and feedback was often small, and the effectiveness of multifaceted interventions may depend more on the other components of the intervention than on audit and feedback. We found few head-to-head comparisons of different intensities of feedback, and such studies are needed.

On the basis of earlier reviews,1,10 we considered printed educational materials to have little or no effect on changing professional practice. However, a recent major review of guideline implementation strategies8 found that printed educational materials might have an effect. This complicates interpretation of our results, as we treated printed materials as no intervention. Doing so may lead to an underestimation of the effect of audit and feedback in studies that compared audit and feedback alone with printed materials, but also to an overestimation of its effect in studies where audit and feedback combined with printed materials was compared with no intervention.

We did not find a significant difference in the relative effectiveness of different types of interventions, but when we combined audit and feedback alone and audit and feedback with educational meetings into a single category, the absolute effect (adjusted RD) was significantly larger for multifaceted interventions than for the combined category. However, the difference in median adjusted RD was small, and the ranges of RDs overlapped (fig 3). These findings are more consistent with the conclusions of a review of interventions to implement clinical practice guidelines8 than they are with an earlier overview of systematic reviews of interventions to change professional practice.1

Seven studies provided direct, randomised comparisons of different ways of providing audit and feedback. On the basis of these comparisons and indirect comparisons across studies, it is not possible to determine what, if any, features of audit and feedback have an important effect on its effectiveness. Although there are hypothetical reasons why some forms of audit and feedback might be more effective than others, there is no empirical basis for deciding how to provide audit and feedback. There is a need for well-designed process evaluations embedded in trials to explore and provide insights into the complex dynamics underlying the variable effectiveness of audit and feedback.

We found only seven studies of audit and feedback compared with other interventions. The results of the two comparisons of audit and feedback with reminders13,14 are consistent with the conclusions of Buntinx et al37 that both can be effective, and do not provide strong support for either being clearly superior, although the reminder group performed better than the audit and feedback group in both of these studies. To the extent that these results can be considered reliable, they support Mugford et al’s conclusions that feedback close to the time of decision making is likely to be more effective,3 as reminders by definition occur at the time of decision making.

The evidence presented here does not support mandatory use of audit and feedback as an intervention to change practice. However, audit is commonly used in the context of governance, and it is essential to measure practice to know when efforts to change practice are needed. In these circumstances, health professionals may receive feedback without explicitly having the responsibility to implement changes on the basis of that feedback. The effects of audit and feedback may be larger when health professionals are actively involved and have specific and formal responsibilities for implementing change.

CONCLUSIONS

Audit and feedback can be effective in improving professional practice, but the effects are generally small to moderate. The absolute effects are more likely to be larger when baseline compliance with recommended practice is low and, for audit and feedback with or without educational meetings, when feedback is provided more intensively.

Acknowledgments

We thank Dave Davis, Brian Haynes, Nick Freemantle, Emma Harvey and Cynthia Fraser for their contributions to the first version of this review. We also thank Jessie McGowan for conducting searches for this update.

REFERENCES

Footnotes

  • Competing interests: None declared.

  • Further information: A table of all included studies and results tables is available on request from the corresponding author.