
Choosing quality problems wisely: identifying improvements worth developing and sustaining
Christine Soong1, Hyung J Cho2,3, Kaveh G Shojania4

1 GIM, Mount Sinai Hospital, Toronto, Ontario, Canada
2 Quality and Safety, NYC Health and Hospitals, New York, New York, USA
3 Medicine, NYU Grossman School of Medicine, New York, New York, USA
4 Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada

Correspondence to Dr Christine Soong, GIM, Mount Sinai Hospital, Toronto, ON M5G 1X5, Canada; christine.soong@utoronto.ca


The years since the launch of the Choosing Wisely Campaign1 2 have seen an increase in studies reporting interventions aimed at reducing low-value care, from unindicated imaging3 4 and laboratory tests3 4 to prescriptions for medicines5–7 that deliver no net benefit. Many describe use of some combination of the usual suspects among intervention types: education,5 8 performance feedback data (sometimes described as audit and feedback, social comparison or peer comparison), policy changes (eg, restricting release of blood products to 1 unit at a time based on a haemoglobin cut-off in non-haemorrhage situations)9 and computerised provider order entry (CPOE)–based modifications (eg, alerts) or restrictions.10

In this issue of BMJ Quality & Safety, Ambasta and colleagues examined the impact of a social comparison and education intervention on routine blood test utilisation at a single academic medical centre.3 Trainees and attending physicians each received their own performance feedback compared with a group aggregate. Compared with controls, the intervention groups ordered fewer routine laboratory tests (incidence rate ratio 0.89; 95% CI 0.79 to 1.00; p=0.048), with an associated cost saving of CAD$68 877 (p=0.020).

Before commenting on the intervention itself, the value of controlled comparisons bears noting. The statistical process control charts shown by Ambasta et al clearly demonstrate special cause variation, with an obvious reduction in the weekly mean number of target laboratory tests per patient-day. In addition, the reduction is temporally associated with the roughly 4.5-month intervention period and remains sustained during a 1-year post-intervention period. Yet, the three control sites show the same pattern. The results may thus constitute a case of secular trends, with a ‘rising tide lifting all boats’.11 The rising tide here presumably owes its origin to the widespread interest in eliminating low-value care, as with the Choosing Wisely …
