Quality measures: bridging the cultural divide
Liam J Donaldson,1 Ara Darzi2

1 Department of Surgery and Cancer, Division of Surgery, Imperial College, London, UK
2 Department of Surgery and Cancer, Institute for Global Health Innovation, Imperial College, London, UK

Correspondence to Dr Liam J Donaldson, Department of Surgery and Cancer, Division of Surgery, Imperial College London, Room 1090a, 10th Floor, QEQM Building, St Mary's Hospital, Praed Street, London W2 1NY, UK; l.donaldson@imperial.ac.uk


Introduction

‘You can't expect Joe Six-Pack to be Bob Brook.’ That was an off-the-cuff remark thrown into a heated debate about the public reporting of healthcare quality data at one of the ‘Pennyhill Park’ health policy annual events hosted by the Commonwealth Fund and the Nuffield Trust.1 Robert H Brook, former director of RAND Health, has been a colossus in the field of healthcare quality measurement for over 30 years.2

The sentiment expressed in that remark captures the essence of the dilemma of establishing a policy on the collection and use of data to assess the quality of a healthcare system. We need enough of the right kind of data to draw reliable and valid conclusions about the performance of a hospital or health service, but the resulting analysis cannot be so technically challenging as to overwhelm its users, including the potential recipients of care and managers of the health system. Most chief executive officers in the English National Health Service (NHS) are not health professionals, and vanishingly few are sophisticated health services researchers. Attempts to overcome this problem with simpler data and messages risk becoming simplistic and misleading.

Meyer et al,3 in this issue of the Journal, express concern at the ‘sky-rocketing’ number of quality measures now required for accountability purposes in the US healthcare system, and predict that the count could easily grow over the next few years from the current hundreds of metrics to thousands.

We compare their critique of the place of quality measures in the US healthcare system with the context of the NHS as it undergoes fundamental redesign of its structures and accountability mechanisms.

Money and activity versus quality: a false dichotomy

In England, the NHS has been slower to grasp the challenge of measuring quality. Until recently, policy makers have been content to pride themselves on having made quality improvement a central goal of the system. Laudable though this has been, the dangers of espousing quality as a strategic goal without considering how to operationalise it properly (including measurement) have been evident in spectacular failures in standards of care, at times when the much more easily quantifiable measures (money and activity) were the true priority for managers of the system.4,5

This issue has persistently rankled with professional staff delivering front-line care. No matter how often the language of quality and safety is spoken by those assessing performance, the true lingua franca of healthcare is financial. Many clinicians in Britain remain unshakeable in their belief that what really matters to patients, or to themselves, is not the same as what influences those running the service. The portrayal of this as a difference in values further deepens the rift in perceived perspectives. The dysfunction has arguably been worsened by the NHS policy of Payment by Results (PbR),6 introduced in 2003. In most cases, this links payment to volume of activity, although a growing number of ‘Best Practice Tariffs’ incorporate quality measures, such as rapid access to a CT scan for patients who have suffered a stroke.7 PbR has made clinicians aware of the way in which data derived from patient records are linked to financial reward for their institutions. Meanwhile, the absence of a comparable set of powerful data on the quality of care provided, data that are universally believed and trusted, sustains this divide between managerial and clinical cultures.

NHS hospital organisations are now required to produce a set of quality accounts alongside their financial report, but whether boards give the two equal weight remains to be seen. The argument that the business plan of a hospital and its quality plan might be one and the same document can bring a glimmer of enlightenment to even the most hardened managerial traditionalist, but it has not yet translated into a paradigm shift in the concept of accountability and how performance is judged.

Clinically curated versus administratively derived data

The successful development of widely accepted and extensively used quality measures in the NHS has been limited to particular fields of care. The Myocardial Ischaemia National Audit Project (MINAP)8 is a prominent example. All hospitals in England and Wales that admit patients with acute coronary syndromes provide data. This has allowed a rich and detailed description of the quality of care in different centres and over time. It has allowed services to be held to account against standards set in the National Service Framework for coronary heart disease.9 It has been embraced clinically and managerially, and other initiatives involving rigorous collection of specified quality data in heart disease have enabled public reporting of service performance, as well as non-emotive public debate about variations in standards of care.10 However, the number of truly clinically led national data collection systems remains small and is restricted to major conditions.11

By contrast, the derivation and use of a high-level summary index of quality, such as the Hospital Standardised Mortality Ratio, from routinely available data has been dogged by controversy.12 Views have been polarised between those who point to the measure's indispensability in uncovering hitherto hidden problems in a service, and those who would defend reputations to the hilt against a statistic they consider completely flawed. The time-honoured middle-ground position, that such a measure is only a starting point for further investigation, has held little sway in the heat of recent public discourse about measuring the quality of a hospital's services.
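In schematic terms (a sketch of the standard construction of such ratios, not the specific methodology of any one published measure), a hospital standardised mortality ratio is computed as

\[ \mathrm{HSMR} = \frac{\sum_i O_i}{\sum_i E_i} \times 100 \]

where, for each admission i, O_i is 1 if the patient died in hospital and 0 otherwise, and E_i is the probability of death predicted by a risk-adjustment model reflecting case mix (for example, age, diagnosis, comorbidity and admission type). A value above 100 signals more deaths than the case mix predicts; most disputes turn on whether the adjustment embodied in E_i is adequate.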

The main source of NHS information on hospital patients, from which this and similar summary measures of quality are derived, is Hospital Episode Statistics. This system has been criticised for substandard data quality.13,14 Routinely available data are a poor reflection of clinical practice, and clinicians show little interest in them. Indeed, in one study, only 8% of clinicians participated in the validation of clinically coded data.13 Poor-quality data lead to a lack of confidence in them, and ultimately to rejection of findings derived from them. A project undertaken by the Royal College of Physicians of London and the Department of Health in England sought to secure greater clinical engagement in routinely available data systems, with the aim of improving their quality, use and relevance.15 It found that clinicians were wary that such data might be used to judge them, both because the scope did not encompass outpatient settings, where much of their work was based, and because modern care is delivered by teams, so that consultant-specific quality measures are not reliable.16

The strength of clinical specialty or condition-specific systems that capture quality measures, like MINAP, is that they are mainly clinically curated. Thus, the data selected for inclusion are relevant and outcome orientated; capture of data is comprehensive; and results are less often disputed by outliers, as frequently happens with data gathered under management auspices. When the data standards have been set by your peers, it is harder to wave away, or excuse, inconvenient statistics about your service.

NHS system reforms require good data

A commitment to making quality and safety the ‘organising principle’ of the NHS was made by Prime Minister Brown's Government in its White Paper, High Quality Care for All.17 Background work for this strategy found that the highest performing clinical teams used quality data routinely.18 In one US hospital that ranks among the leaders on one quality-ranking scheme, the approach to gaining full clinical engagement has been to make the use of data a credible ‘scientific’ endeavour rather than an activity required by management (Pronovost P; personal communication). Creating a clinical culture in which such ‘dataphilia’ is commonplace among NHS clinicians is a major challenge, but if it could be done, there would be major gains. The reasons why it is not so currently are again deep-seated and include: limited exposure to quality-of-care concepts in undergraduate and postgraduate medical education programmes; a lack of confidence in the validity of routine data; the absence of professional leadership in embedding data use as a core component of good clinical practice; and the low value of health services research (using such data) in career advancement compared with clinical and molecular research.

Meyer et al3 call for greater selectivity and for counting things that matter. They argue for metrics that serve the needs of end users (patients, families, payers) in holding performance to account and in judging quality and value, while achieving parsimony in the number and type chosen. It is important that there are such system-wide indicators providing insight into the quality and safety of the care being provided, whether that ‘system’ is the whole provision of care in a country or that delivered in a hospital or primary care service. However, it is equally important that front-line clinical teams also have data they can use to compare themselves with the best, and to assess whether they are improving over time. Those data will encompass many more measures of quality and will relate closely to the specific context of the area of care. Health policy makers are not always sure whether to encourage clinicians to have as much data as they feel they need to evaluate their services. There is also uncertainty about how much of such data should be extracted for management oversight, or rolled into summary measures for monitoring system performance. Moreover, there is a paucity of good methodological work on how best to aggregate information from a diversity of quality measures, a point illustrated below.
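To see why such aggregation is harder than it looks, consider a hypothetical composite of the kind often proposed (offered here for exposition only, not a method in use in the NHS), in which each of K quality measures is standardised and then combined with weights:

\[ C = \sum_{k=1}^{K} w_k z_k, \qquad z_k = \frac{x_k - \mu_k}{\sigma_k} \]

where x_k is a provider's value on measure k, \mu_k and \sigma_k are the national mean and standard deviation for that measure, and the weights w_k sum to 1. A provider's apparent rank can shift materially under different, equally defensible choices of weights, standardisation or handling of missing measures, which is precisely the methodological gap noted above.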

Major reforms to the NHS in England, now being implemented, create an urgent need for good data in two areas. First, with plans to devolve most decisions about the planning and funding of care for populations to local level, in entities called Clinical Commissioning Groups, the previous management bodies that controlled these functions will disappear. Accountability for the delivery of provider-contractual requirements, for assessing return on public investment and for securing equity of care will only be possible through data. Second, the new system relies on choice by patients. This too will require data.

The coalition government that came to power in 2010 set out these reforms in its White Paper, Equity and Excellence: liberating the NHS.19 This acknowledges the importance of data on quality and sets out proposals for an NHS Outcomes Framework covering five outcome domains (premature death, quality of life for chronic disease, recovery from illness or injury, experience of care, safety), and a comprehensive suite of standards to support the framework. However, the focus on outcome over process metrics,20 the centrally defined nature of the framework, and the lack of widespread clinical consensus have led to criticism. A library of some 150 standards is to be developed over the next 5 years by the National Institute for Health and Clinical Excellence, which may generate a more clinically plausible, if not lean, set of standards.21

After a series of horror stories about staff attitudes towards care, often involving elderly or vulnerable patients,22 the measurement of patient experience has gained renewed focus within the NHS. Although detailed national random surveys of experience already exist, asking a broad array of questions, the English NHS is now pushing the notion of a simple, single question borrowed from consumer notions of customer service: would you recommend to family and friends the service you have received? In this drive towards simplicity, however, there have been further voices of discontent. In reviewing the approach, the Picker Institute pointed out that putting such a question to a cancer survivor could be insensitive,23 demonstrating that simplicity alone is not always acceptable.

Data that measure the quality of healthcare are needed for accountability, consumer choice and quality improvement. In their development, the fundamental tension of performance measurement comes to the fore: metrics must be acceptable to clinicians, collectable from management systems and understandable by the public, a simple triad that is hard to reconcile.

Conclusion

The longstanding orientation of the US healthcare system towards billing, claims documentation, accreditation of providers, scrutiny by payers, and public reporting of outcomes has ensured a commitment to the development of data systems to measure clinical service performance. As Meyer et al3 put it: ‘Our investments in required quality measures have served us well.’ They rightly argue for careful selection from among the large volume of available quality measures, as well as for keeping the numbers used down to manageable levels.

The NHS is in a very different place. For the first 50 years of its existence, quality was implied but not made explicit. When, in the late 1990s, frameworks and programmes were created to promote higher quality and safer care, measurement trailed behind. As a result, the things that could be measured (ie, money and activity) were the focus of management. When major failures in standards of care did occur, investigators pointed out that the quality rhetoric did not match the realities of patient care. The NHS has lacked a set of clinical data that are comprehensive, trusted and that provide deep insights into the quality and safety of care. The culture of care does not currently embrace the centrality of data to a health professional's work. As a priority, clinicians need access to good data that give them insights into the quality and safety of the care they are giving to their patients. Such data will have to be detailed and specific to the context of their field of care. Getting there will mean a transformation of culture and attitudes that in turn will require strong leadership.

The quality of NHS administratively derived data remains patchy. Ironically, databases established and run by clinical groups have been very successful, overcoming problems of clinical engagement, trust in the information they provide and intelligibility. However, they have remained somewhat apart in visions of the future, as the focus has shifted to deriving measures from the electronic health record. Clarity of strategic thinking is required to sustain the best of the clinical databases, engage clinicians in strengthening routinely available administrative data, and to design the means to translate digital records into data that speak the language of quality.

Reforms to the NHS in England, currently underway, involving devolution of budgets to local clinical groups and a stronger role for regulators, cannot succeed in establishing accountability for the quality and safety of care, or for the stewardship of very large amounts of taxpayers' money, without credible data. Ambitious plans are in place to generate measures of quality. Ultimately, this will lead into the same territory as Meyer et al3 have described. The NHS will have to accept, in system-wide monitoring, the modern maxim that ‘less is more’.

References

Footnotes

  • Competing interests None.

  • Provenance and peer review Commissioned; internally peer reviewed.
