Measurement of healthcare quality
The healthcare quality measurement industry: time to slow the juggernaut?
T A Sheldon

Correspondence to: Professor T A Sheldon, Department of Health Sciences, University of York, Heslington, York YO10 5DD, UK


It is time to pause and reflect on the degree to which performance measurement is acting optimally and in the interests of society and health

The last 10 years have seen an explosion of activity in the measurement of healthcare performance, with huge resources spent on many different systems of data collection, analysis, and reporting and on the development of thousands of indicators. Large exercises have been undertaken by various quality organisations to develop, apply, and report the results of performance indicators. Examples include the National Quality Forum, the Joint Commission on Accreditation of Healthcare Organisations, the National Committee for Quality Assurance and, in the UK, the Healthcare Commission and Dr Foster. This has become a multi-million pound industry, fuelled partly by society's increasing anxiety (especially among its political representatives) about variation in the quality and safety of care, an anxiety heightened as more measurement reveals even more problems. Whenever such an industry develops rapidly, it is useful to pause and reflect on the degree to which it is acting optimally and in the interests of society and health.


As with many new technologies in which people invest hoping they will solve problems simply, the experience has been disappointing. A catalogue of problems has been reported, ranging from poor data quality and comparability, the cost and burden of collection, differing priorities and perspectives among stakeholders, and insufficient expertise to, most importantly, insufficient linkage with subsequent action. These problems are also encountered in industry, but performance assessment and management is even more difficult in health care, where organisational (including societal) goals have greater dimensionality. Health care is less deterministic, and the link between actions and outcomes is much less direct than in most production processes, being modified or confounded by other activities, patient case mix, and other non-healthcare factors. The relationship with the customer is more complex than in many other services, and there is a wider range of stakeholders with incompatible aims.

The performance measurement industry (public and private) takes as its starting point that “quality measurement and reporting is a powerful mechanism to drive quality improvement”.1 However, there is still little evidence of a positive impact on decision making, improvement in health service delivery, or health outcomes.2 We do not know the degree to which measurement and reporting, by itself or linked to other processes, results in improvements in quality and safety, not only in the aspects of care measured by the indicators used but also in those not measured by them—that is, the overall effect. Groups busy developing “evidence-based indicators” do not appear to apply the same criteria to their own activity as they do to clinical practice. Given the immense resources going into this activity, it is astounding that there has not been more pressure to demonstrate impact and value for money. Just as new health technologies have to be rigorously evaluated for effectiveness, and increasingly for cost effectiveness, so should performance measurement systems.3


Such research on performance assessment systems as has been carried out is often of poor quality and naïve. Evaluations are usually tautological, in the sense that the yardsticks used to evaluate the impact of performance assessment are the same potentially imperfect instruments used in the assessment itself. This reflects a more general problem of poor research into quality improvement.4 Experimental approaches have generally been eschewed in the quality improvement field. However, single group pre-test/post-test designs have low internal validity because they lack a counterfactual (what would have happened without the intervention).5 Different designs can give widely divergent results: the more rigorous the evaluations of continuous quality improvement, for example, the smaller the estimated impact.6 The point here is that evaluations should aim to convince those who are sceptical, or who will be asked to make serious investments or change their practices as a result, not those who are already supporters. In addition, alongside more experimental approaches, researchers need to consider both the “whether” and the “why” questions in the same evaluations, and this presents some interesting methodological challenges.

The performance indicator industry needs to move away from feeding the performance measurement “sausage machine” that produces ever more sophisticated indicators. Instead, we need to consider more closely the effects of this activity on the quality and safety of organisations7 and also its possible unintended effects.3 Indicators are not direct measures of performance, although they can be used to draw attention to issues that may need further investigation or as flags to alert us to possible opportunities for improvement. In many cases considerable analysis, interpretation, and further investigation (drilling down) are required in order to understand properly what is happening, why, and what can be done to improve or sustain performance. The interpretation of variations in indicators may often be wrong, leading to inferences that are both misleading and unfair.8


What effect does the collection, publication, and use of performance data have on levels of trust and on other social and organisational features of healthcare delivery, the professions, patients, and the public? No system of external measurement and auditing can substitute for the relations of trust and professionalism which can also promote quality.9 The indicator industry has begun to suffer from the “regulators’ delusion”: that central systems of oversight are the sole guarantors of quality and a bulwark against poor practice and performance. The contrary is true; most healthcare professionals have a common and natural concern for the benefit of their activities to patients. They do not respond only to formal evidence of performance, although, of course, these formal systems can make a significant difference, albeit mainly at the margin.

The creative combination of oversight and active professional self-regulation is probably the best way forward. The promotion of professionally led clinical audit based on high quality clinical databases is one promising approach that can harness the enthusiasm of clinicians. As trust is eroded in general, a process accelerated by the culture of measurement, comparison, and exposure, one of the key policy and research questions for the industry is whether we can develop approaches that promote trust rather than erode it.
