Refocusing quality measurement to best support quality improvement: local ownership of quality measurement by clinicians
James Mountford1, Kaveh G Shojania2
1University College London Partners, London, UK
2Department of Medicine, University of Toronto, Toronto, Ontario, Canada
Correspondence to: Dr James Mountford, Director of Clinical Quality, UCL Partners, London, UK; james.mountford{at}uclpartners.com


Recent years have seen unprecedented efforts to measure healthcare quality and to link such measurement to improved care delivery. The methodological and pragmatic complexities of these efforts have led to major debates: which ‘dimensions’ of quality to measure; whether to focus on processes or outcomes; which outcomes to prioritise—traditional clinical outcomes or more patient-centred ones; and, perhaps most important, how to link measurement to action through policy, professional and management levers.1

A variety of quality measurement schemes exist in many countries, including confidential reporting directed at healthcare organisations, public reporting of performance, and policies tying performance to funding, such as ‘pay for performance’.2 3 Further, in some countries, professional training and/or licensing and revalidation processes for doctors include skills to measure and improve quality as core competencies.4–6 Moreover, public and governmental expectations for quality measurement have not just continued to rise but have expanded to include interest in long-term conditions, rather than the historical focus on short-term outcomes after surgery or hospitalisation for acute medical conditions.

The question thus arises: what approach to measuring and reporting quality will best equip health systems to address these needs, especially given ubiquitous fiscal constraints? In this commentary, we first outline five general categories of problems that have beset quality measurement efforts to date. We then argue that progress depends on clinicians and institutions taking local ownership of quality measurement.

Historical shortcomings in healthcare quality measurement

Prioritising one type of measure

Much debate has focused on whether processes or outcomes constitute the ‘best’ quality measures. This debate has a long history7–9 but represents a false choice. Each element of Donabedian's triad of structure, process and outcome has both advantages and disadvantages,10 with no single category providing the best performance measurement across all settings and circumstances (see table 1). In healthcare, as in other industries, the players who achieve the best outcomes pay the greatest attention to implementing reliable, effective and efficient processes of care, putting in place the key elements of structure, and measuring the outcomes they achieve as a result.11

Table 1

Advantages and disadvantages of Donabedian's three categories of performance measures

Defect-focused

Quality measurement has often focused on a few easily measurable elements of care, often on what is being done ‘wrong’ rather than what is being done ‘right’. The most prominent measures in most systems have included mortality rates, hospital-acquired infection rates, surgical complications and ‘never events’.12 These are, of course, important elements of quality and safety and rightly form part of regulatory frameworks ensuring minimum standards for patients. Acting on them may save lives and prevent harm. However, focusing disproportionately on failure to achieve minimum standards and on what is not working has hindered quality measurement from gaining broad traction with clinicians.

Most clinicians do not go to work eager to avoid healthcare-associated infections. Rather, they seek to diagnose and manage problems within their field of practice. The goals of these activities are generally not captured by overarching outcomes (eg, mortality), complications that cut across specialties (eg, healthcare-associated infections, adverse drug events and ‘never events’) or adherence to minimum, evidence-based process measures. For instance, most cardiologists do not find terribly motivating the goal of not forgetting to prescribe appropriate antiplatelet agents, beta-blockers, and other proven therapies for the management of coronary artery disease. They likely derive even less motivation from the goal of avoiding nosocomial infections. While specialists may recognise the importance of such goals for the health system, they seem unlikely to take ownership of such cross-cutting measures that do not relate specifically to the goals of their clinical work.

“Do no harm” may be the first principle in medicine, but it is not the most inspiring one. Thus, many quality measures have failed to capture the imagination of clinicians or to create a sense of professional responsibility for performance measurement.

Limited relevance to most clinical specialties

Driven by the minimum standards approach, the clinical specialties that have taken quality measurement furthest tend to be those like cardiac surgery and intensive care, which have some clear and easily measured outcomes, including death and avoidable complications. For most specialties, such measures are of marginal relevance at best to routine practice. For instance, mortality rates thankfully have little relevance to most paediatric or obstetric care, never mind fields such as ophthalmology, rehabilitation medicine or palliative care.

Each specialty should be much more proactive in defining, developing and improving against the technical measures most relevant to that specialty. That said, even if professional organisations develop performance measures, these measures will serve little purpose without concomitant efforts to foster local performance improvement initiatives with respect to these measures. Without such efforts, performance measures will suffer the fate of so many practice guidelines—developed and published with fanfare, but variably implemented in routine practice. Some professional organisations have led effective quality measurement and improvement initiatives.13 Such initiatives need to become more routine.

Where current evidence does not provide robust support for specific elements of care, specialties may need to identify high-performing systems of care and agree upon the likely ingredients for success. Improvements in cystic fibrosis care in the USA exemplify this approach. Over four decades, the American Cystic Fibrosis Foundation Patient Registry has evolved to include over 300 variables for some 26 000 patients, covering numerous aspects of treatment, including clinical and functional endpoints as well as patients' (or their parents') assessments of the quality of care received. Rigorous measurement of metrics relevant to patient outcomes underpins the marked increase in life expectancy achieved since the 1980s—from 27 years in 1989 to 36 years in 2009.14

Limited relevance to patients

Quality measures must also have direct relevance to patients' lives, including functional outcomes (eg, returning to work) as well as their experience of and satisfaction with the care they receive.15 16 In the UK, Patient Reported Outcome Measures (PROMs) and Patient Reported Experience Measures (PREMs) attempt to achieve this goal, and similar approaches are increasingly used in other systems, for example, Consumer Assessment of Healthcare Providers and Systems in the USA.17 18

Any specific measures of patients' assessments of their care, such as PROMs and PREMs, will undoubtedly generate debate.19 20 They will also require attention to technical issues, such as adjustment for case-mix21 and tailoring measures so that they best reflect what matters most to patients for a given condition. However, the general point remains that greater weight must be given to these dimensions of quality.
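As a purely illustrative sketch of what case-mix adjustment can involve (not drawn from this article; the covariates, provider labels and simulated data are assumptions for demonstration only), the Python fragment below applies indirect standardisation: a patient-level model estimates the outcome expected from case-mix alone, and each provider's observed rate of a good outcome is then compared with its expected rate.

```python
# Illustrative sketch of case-mix adjustment by indirect standardisation.
# All data are simulated; variable names (age, severity, provider) are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "severity": rng.integers(0, 4, n),          # hypothetical baseline severity band
    "provider": rng.choice(["A", "B", "C"], n),
})
# Simulated outcome: probability of a good functional result falls with age and severity
logit = 3.0 - 0.03 * df["age"] - 0.5 * df["severity"]
df["good_outcome"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit the case-mix model on patient-level covariates only (no provider term)
X = df[["age", "severity"]]
model = LogisticRegression().fit(X, df["good_outcome"])
df["expected"] = model.predict_proba(X)[:, 1]

# Observed versus expected rate per provider
summary = df.groupby("provider").agg(
    observed=("good_outcome", "mean"),
    expected=("expected", "mean"),
)
summary["o_to_e"] = summary["observed"] / summary["expected"]
print(summary.round(3))
```

An observed-to-expected ratio near 1 suggests performance in line with what a provider's case-mix would predict; similar logic underlies many risk-adjusted comparisons of outcomes and PROMs.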

Episodic and fragmented

Quality measures have almost exclusively focused on assessing the performance of individual organisations on single episodes of care. Individual steps in care pathways—such as door-to-balloon time for acute myocardial infarction—are, of course, important: this measure is clearly linked to reductions in mortality and ongoing morbidity.22 But episodic measures can never completely describe the performance of a health system in a way that matters most to patients. Furthermore, gaps in quality, safety and efficiency often occur at transitions of care, both between and within organisations.23 Robust measurement systems that reflect performance related to care transitions will more accurately capture quality at the patient level. Such systems will also encourage the development of pathways of care that are better coordinated across the many boundaries that exist in every health system.

The need for quality measures that span patients' journeys, as opposed to single episodes of care, is most apparent for long-term conditions. The best care for acute myocardial infarction is the care not needed because the acute event was averted by excellent ‘upstream’ care and prevention. In the USA, the creation of accountable care organisations spanning the continuum of care aims to improve the system's ability to take a whole pathway perspective by better aligning incentives and execution capabilities within one organisation.24 25

Moving forward: engaging clinicians and institutions in performance measurement and improvement agendas

The shortcomings outlined above do not have simple solutions. Legitimate debate should exist around which measures to choose in any given situation, how best to capture patients' journeys across different clinical settings, the technical merits and drawbacks of specific measures, and so on. However, little progress will occur until clinicians and institutions (hospitals, group practices, etc.) take ownership of these issues and play active roles in measuring their performance.

Across countries, most clinicians—especially physicians—and healthcare institutions have regarded performance measurement primarily as an administrative burden. They have seen measurement as a task with marginal relevance to patient care imposed by outside forces—insurers, regulators, government agencies—rather than viewing measurement as an important ingredient in improving patient care. However, the best performing healthcare organisations take local ownership of quality measurement, and they do so proactively, rather than reactively in response to external demands.11 26 For example, Intermountain Healthcare (Utah, USA) has over 200 performance measures in routine use, of which more than two-thirds were developed or adapted internally rather than imported from outside.11

A similar example from the UK is provided by University College London Partners (an academic health science system in North London and the home system of one of the current authors). Clinical academic leads across 11 clinical programmes are encouraged to convene clinical colleagues and patient representatives to co-create a working set of quality measures for each disease or pathway of interest. University College London Partners' stroke measure set comprises 15 measures spanning clinical outcomes, functional outcomes (PROMs) and aspects of patients' experience of care delivery (PREMs). Collectively, these describe the essential features of excellent stroke care, from risk factor awareness and primary prevention through to rehabilitation and secondary prevention.27

The Veterans Affairs (VA) health system in the USA demonstrates this commitment to performance measurement as a driver of improvement at the level of an entire health system. The VA has driven substantial improvements in quality and in frontline clinical ownership of the quality agenda through system-wide processes to adopt and adapt local quality measures for use across the system, most notably in surgical outcomes through the National Surgical Quality Improvement Program.28–30 Intensive care units in the VA provide another example of system-wide performance measurement.31 This commitment to measurement at the system level has undoubtedly played a role in the VA's documented record of superior care across a range of conditions.32

From patient to system

When taking care of a patient, the routine of clinical assessment, formulating a diagnosis and instituting a management plan is second nature to clinicians, who then periodically check that the tests and investigations requested have been performed and ask the patient whether they are improving. A health system can be approached in the same way: an initial assessment yields a ‘diagnosis’ based on various quality and productivity measures; changes are then initiated to address the problems identified; and, as ‘treatment’ progresses, these measures are repeated, while also checking in with the ‘patient’. Just as treatments serve no purpose if patients do not experience better health, so it is with changes to our systems of care.

“But, I don't have time for this. I'm too busy taking care of patients.”

Research over past decades has repeatedly shown that suboptimal systems of care hold back the delivery of effective care to patients.33 34 Thus, clinicians can no longer maintain that poorly organised systems are somebody else's problem. Clinicians who truly care about the outcomes of their patients must turn their attention to the systems of care in which they deliver patient care. Central to improving a system is defining and tracking those measures that best describe that system's performance.

Physicians must engage in quality measurement and improvement efforts proactively and must do so with real ownership. Failure to do so will perpetuate the current problems, which directly affect the care of individual patients, and will invite the increasing imposition of a quality measurement agenda by external bodies. Indeed, clinicians have too often espoused contradictory excuses for not participating in activities related to quality measurement and improvement. On the one hand, they have claimed that they are too busy taking care of patients and that system issues are somebody else's problem. On the other hand, whenever someone else proposes an initiative to measure or improve quality, clinicians criticise it as clinically unsound.

The focus on healthcare quality in the past decade has so far been relatively kind to clinicians. Despite numerous studies documenting how often and how far patient care falls short of best practice, emphasis has mostly been placed on the need to improve the systems in which care is delivered, rather than on the competence or professionalism of clinicians. While systems certainly need improving, clinicians must also do their part in improving systems. Vigorous performance measurement plays a vital role in improving systems, and here clinicians must not drag their feet, grudgingly accepting or passively resisting metrics imposed from outside. Rather they must participate actively in the development of metrics, joining a dialogue with colleagues, patients, payors and regulators of healthcare to articulate how quality is best described for each clinical setting, service or pathway.

If clinicians do not respond to the growing calls to make the measurement of healthcare quality a core part of their professional activities and ethos, the perception of clinicians as victims of poorly designed systems may change to a less charitable one. Clinicians risk being seen as part of the problem and at that point it may be too late for them to lead (or even to participate meaningfully) in the development of performance metrics. They will simply have to play along with externally developed metrics, with all the problems these have entailed in the past. Most importantly, patients will not be well served if clinicians do not embrace the measurement and improvement of quality as a core part of their routine work.

Acknowledgments

KGS receives salary support from the Government of Canada Research Chairs Program.

References

Footnotes

  • Linked article 000380.

  • Competing interests None.

  • Provenance and peer review Commissioned; internally peer reviewed.
