
Unpacking quality indicators: how much do they reflect differences in the quality of care?
Jill Tinmouth

Department of Medicine, Division of Gastroenterology, Sunnybrook Health Sciences Centre, 2075 Bayview Avenue, Room HG40, Toronto, ON M4N 3M5, Canada

Correspondence to Dr Jill Tinmouth, Sunnybrook Health Sciences Centre, 2075 Bayview Avenue, Room HG40, Toronto, ON M4N 3M5, Canada; Jill.Tinmouth{at}sunnybrook.ca


Just over 50 years ago, Avedis Donabedian published his seminal paper, which sought to define and specify the ‘quality of health care’, articulating the now paradigmatic triad of structure, process and outcome for measuring healthcare quality.1 In recent years, we have seen the rapid expansion of increasingly inexpensive information technology capability and capacity, facilitating the collection and analysis of large healthcare data sets. These technological advances fuel the current proliferation of performance measurement in healthcare.2 Increasingly, in an effort to improve care, many cancer health systems, including those in England,3 the USA4 and Canada,5 6 are publicly reporting performance indicators, generally derived from these large data sets. Not surprisingly, differences in prevention, early detection and/or treatment of cancer are often used to explain the observed differences in performance across jurisdictions.6–9

Given the considerable effort and resources invested in performance measurement, as well as the potential adverse consequences if it is done poorly,10 it is important to get it right. Determining the effectiveness of healthcare performance measurement is challenging,11 particularly at the health system level. Often, performance measurement is implemented uniformly across an entire system, making well-designed controlled analysis difficult or impossible12 13 and leaving evaluations vulnerable to secular trends.14 At the physician level, audit and feedback studies report variable results: meta-analyses show a modest benefit overall,15–17 but an important proportion of interventions were ineffective or only minimally effective, and a few studies suggest a negative effect on performance.16 This heterogeneity is likely due to the complexity of the endeavour and its many moving parts, which include the behaviour targeted, the recipients of the feedback, their environment, the use of cointerventions and the components of the audit and feedback intervention itself.18 The latter generally comprises performance indicators, often derived from large healthcare data sets; however, …
