What are international comparisons of healthcare quality for? Why and when should we want to compare the performance of health systems across countries and what should we do with the results?
International comparisons of quality, access, and cost in health care are all the rage. The publication of the World Health Organization’s World Health Report on health system performance in 2000,1 in which the health systems of 191 countries were ranked using an aggregate measure based on several dimensions (population health, health inequalities, responsiveness, distribution of responsiveness, and financial fairness), stimulated worldwide attention to the business of measuring and comparing health system performance and provoked a storm of controversy. The WHO methodology was fiercely attacked and equally stoutly defended.2–4 The report aroused anger, especially among commentators from countries which had done badly, such as the US. The US outspends almost every other country on health care and prides itself on the sophistication and enterprise of its health system, yet it was humiliatingly ranked 37th, at the bottom of all industrialised countries and below countries such as Greece, Portugal, and Ireland. For healthcare providers in countries like the UK, who have been subjected over recent years to the publication of hospital league tables, mortality comparisons, and other performance measures, there was a certain “schadenfreude” to be had in watching the defensiveness, discomfort, and denial with which national politicians and policymakers reacted to the WHO report. It seems that nations respond to comparative performance data in much the same way as healthcare organisations.5
With rather less media attention, the Organisation for Economic Co-operation and Development (OECD), which has for some time maintained an extensive database of comparative health system performance indicators, this year published an edited collection of papers on international comparisons of health system performance which comprehensively summarised the state of the art.6 It gives a cautious and contingent account of the conceptual and methodological challenges involved in developing and using performance measures across international boundaries, and provides an authoritative and highly readable insight into some of the solutions. But it still leaves largely unanswered the question of what international comparisons are for. Why and when should we want to compare the performance of health systems across countries, and what should we do with the results?
“we have to keep reminding those who produce international comparative datasets and indicators that their value … must be the contribution they make to improvement”
The work of Marshall and colleagues in this issue of QSHC7 provides a neat case study of this problem of purpose. They show that taking indicators developed in one country and simply using them in another is probably inappropriate and unwise, given the differences in clinical practice and context. In their study, only about 56% of the quality indicators developed for 18 common primary care conditions in the USA made it into the set adapted for the UK. Furthermore, they found that differences in data collection systems and in healthcare financing and organisation made straightforward comparisons difficult. They conclude that transferring indicators between countries is possible, but needs to be done carefully and with due regard for differences in context. Valuable though these points are, however, they are all essentially questions of methodology. The big issue is not explicitly addressed: why should I want to take indicators developed in the US and use them elsewhere, and what would I learn from doing so that might improve the quality of care?
I would contend that performance measurement is too often led by the technical and statistical wizards who develop the measurement systems, and not by the ordinary people who need to use those systems to manage healthcare organisations. As a result, we get a lot of measurement but not much understanding, lots of data but little change. The measurement process is driven by the information available and the clever measures we can build with it, not by our ideas about what needs improving in our healthcare system and how we might do it.8
More radically, I would suggest that we don’t need international comparisons at all. Rather, what we need is international learning, by which I mean the capacity and capability for healthcare policymakers and others to learn from experience elsewhere, good and bad, using the health systems of other countries as testing grounds for innovations before they are piloted or adopted at home.9 If that process needs comparative data, all well and good, but the data and indicators should be an explicit product of the need for learning, targeted on the issue in hand. When international comparative data are assembled and presented on the off chance that they might reveal something interesting, the exercise quickly degenerates into a fishing expedition for differences (and, if you look long and hard enough, you will always find some) or into point scoring and trumping.
The British NHS is more willing now than ever before to look abroad for ideas and lessons on improvement, an encouraging trend which might in future be mirrored in other countries.10 For example, in planning its move towards a new system of “payment by results” for healthcare providers, the Department of Health in England has drawn heavily on the experience of European countries such as Austria, Denmark, and Norway in implementing case mix based reimbursement mechanisms and tariffs.11 To improve disease management in primary care, the Department of Health and its Modernisation Agency are bringing in expertise in IT and care pathways from a number of leading US health maintenance organisations.12 The British improvement collaboratives programme has drawn for inspiration and expertise on the US experience of setting up and running collaboratives.13 In each of these endeavours comparative data can and do play a supporting role, but they are a tool rather than the purpose of the process.
If countries are serious about collaborating and learning from each other’s healthcare systems, then international comparisons of the quality of health care can be enormously valuable in directing and focusing that learning, highlighting and spreading good practice. But if comparative data are used mainly to rank countries like teams in a football league,14 and to fuel a dialogue of the deaf about whose system is better than whose, they will be a profoundly unhelpful and unproductive use of resources that could be spent in so many better ways. Ultimately, we have to keep reminding those who produce international comparative datasets and indicators that their value, and the sole metric of their worth, must be the contribution they make to improvement.