Measuring what matters: refining our approach to quality indicators
Perla J Marang-van de Mheen1, Charles Vincent2

1 Department of Biomedical Data Sciences, Medical Decision Making, Leiden University Medical Center, Leiden, The Netherlands
2 Department of Experimental Psychology, University of Oxford, Oxford, UK

Correspondence to Dr Perla J Marang-van de Mheen, Department of Biomedical Data Sciences, Medical Decision Making, Leiden University Medical Center, Leiden, 2333 ZA, The Netherlands; p.j.marang-van_de_mheen@lumc.nl

Quality indicators are ubiquitous in healthcare and serve a variety of purposes for many different stakeholders. Few would question the value of monitoring the quality of care, but the increasing number of indicators and the resources consumed suggest that some reflection and refinement of approach may be required. For instance, the National Quality Forum catalogue in the USA lists 1167 indicators,1 and a recent study from the Netherlands showed that healthcare professionals from five clinical specialties collect data for 24 different stakeholders on 1380 different variables.2 Healthcare professionals in the latter study spent an average of 52 min per working day documenting data for the wide range of required quality registrations, with only 36% of the indicators perceived as useful for improving the quality of care in daily practice.2

In this issue of BMJ Quality & Safety, Xu and colleagues report a study of the usefulness of nursing home indicators for assuring and improving quality of care.3 These indicators play a role in value-based reimbursement initiatives, and facility scores are publicly reported on Minnesota’s Nursing Home Report Card. This study is notable for focusing on the overall value of the set of indicators rather than the properties of individual indicators. The authors performed a qualitative assessment of the indicator set using literature review and expert opinion. They also examined correlations between indicators and the contribution of each indicator to the assessment of overall nursing home quality. They refined the indicator list and provided a clear domain structure and scoring system, making it much easier for users to understand what is being measured and how the summative assessment can be used to support decision-making. Their approach is analogous to that taken in the development of psychological tests, where the emphasis lies on carefully defining the underlying construct and developing a necessary and sufficient set of indicators to measure that construct.4

While individual quality indicators have been extensively studied, and much has been written about the criteria for a good indicator, considerably less attention has been devoted to the criteria and desirable characteristics of sets of indicators. The paper by Xu and colleagues shows that there is much to be gained by shifting the level of analysis to reflect on the underlying constructs and wider purpose of indicator sets. Rather than accumulating and aggregating multiple individual indicators, in the hope that they will meet the needs of users and health systems, we could endeavour to define the fundamental purpose of indicator sets and then choose relevant component indicators. In this editorial, we attempt to define a core set of questions that could help to shape and refine the core features of an indicator set.

Who will be using the indicator set?

Organisations that publish indicator sets acknowledge, and indeed advocate, that the indicators are used by different groups with different priorities. Minnesota, for instance, uses care home indicators to monitor and track quality of care, but also publishes a helpful guide for families choosing a home for their relative.5 Other groups who may review and use such indicators include clinical staff, managers of homes, administrators concerned with cost and efficiency, and researchers and others concerned with implementing and tracking quality and safety over time. While all these groups are broadly concerned with the quality of care provided, they may use indicators for very different purposes and for answering different questions.6 7 We therefore cannot assume that a particular indicator set will meet the needs of all users equally well.

What questions need to be answered by the indicator set?

Patients and their families may use publicly available quality indicators to inform their choice of a particular provider, but will also rely on other sources that they may well regard as more important than formal indicators. These include personal recommendations and experience, community reputation, reports in the media and the impressions gained during visits.8 Policymakers or insurance companies, on the other hand, may use quality indicators to inform purchasing decisions, enhance transparency and as a measure of overall health system performance. Providers may use quality indicators internally to monitor safety and to support their efforts to improve care; they may also benchmark their care in relation to similar institutions. All these users want to measure the quality of care, but each group faces different challenges and poses different questions. They may therefore differ in what they consider most relevant and which combination of indicators will most accurately measure that. An indicator set must therefore be developed with particular user groups in mind and may need to be adapted to meet the needs of different groups.

What is the underlying construct that is to be measured?

Quality indicators are generally intended to reflect the well-established quality domains of effectiveness, safety, efficiency, patient-centeredness, equity and timeliness, suggesting that the set of indicators will cover most of these dimensions.9 In fact, it may be necessary to measure a number of different constructs, reflecting different dimensions of quality of care. The relationships between these different constructs must also be considered; for instance, care may be efficient, but not equitable, or conversely equitable but not efficient. In practice, however, most organisations simply produce long lists of specific indicators (eg, on the US Care Compare website) with no indication of what construct they are intended to measure, or whether the set measures that construct coherently and validly. Similar sites in the Netherlands show the mandatory indicators collected by Dutch hospitals for the healthcare inspectorate, including a mixture of particular clinical processes (eg, medication or pain management), outcomes (eg, readmission) and care for specific patient groups (eg, the elderly), without being clear whether these are assessed in their own right or whether they are meant to reflect a particular construct.10 Defining the underlying construct is never going to be straightforward and will always lead to such questions as ‘what exactly do we mean by safety?’. However, this process will also greatly clarify the construct and ensure that each individual indicator does contribute to the overall measurement objective.

A recent systematic review confirms that most organisations give little attention to the validity or utility of indicator sets as opposed to the validity of individual indicators.11 Again, analogous to the development of psychological measures, Schang and colleagues assessed the content validity of sets of indicators by examining whether the indicators sufficiently covered the construct in question, whether different aspects of care were proportionately represented and whether the set contained irrelevant indicators. Only 15% of studies included in this systematic review addressed all three criteria, although the majority did examine aspects of content coverage, particularly the breadth of relevant content. Besides content validity, the review revealed four additional substantive criteria for construction of future indicator sets: cost of measurement, prioritisation of ‘essential’ indicators for the purpose of the assessment, avoidance of redundancy and size of the set. Additionally, four procedural criteria were identified: stakeholder involvement, using a conceptual framework, defining the purpose of measurement and transparency of the development process.

How many indicators do we need?

The assumption of most organisations developing and requiring indicators seems to be that the best way to ensure overall quality of care is to measure every possible aspect of care provided. An alternative approach would be to focus on specific underlying constructs of interest for a particular purpose, such as the safety or equity of care provided, and identify a set of indicators that would reliably and validly assess that construct. We would, in effect, be carrying out a targeted and focused sampling of the care provided. So rather than developing long lists of indicators to assess every aspect of care, we should ask how many indicators are needed to make a reasonable assessment of a specific underlying construct. If the indicator set covers multiple constructs (for instance, safety, efficiency and equity), then we should look for redundancy within rather than across these domains, as we still need content validity for each of these constructs.
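
To make within-domain redundancy screening concrete, here is a minimal sketch in Python. The indicator names, facility scores and the 0.8 correlation threshold are all hypothetical illustrations for the purpose of the example, not validated choices.

```python
# Minimal sketch: flag potentially redundant indicator pairs within a
# quality domain. All names, scores and the threshold are hypothetical.
import pandas as pd

# Hypothetical facility-level scores; rows = facilities, columns = indicators.
scores = pd.DataFrame({
    "falls_rate": [2.1, 3.4, 1.8, 4.0, 2.9],
    "pressure_ulcer_rate": [1.9, 3.1, 1.7, 3.8, 2.6],
    "restraint_use": [0.6, 0.5, 1.0, 0.6, 0.4],
})
domains = {"safety": ["falls_rate", "pressure_ulcer_rate", "restraint_use"]}

REDUNDANCY_THRESHOLD = 0.8  # illustrative cut-off, not an established standard

for domain, indicators in domains.items():
    corr = scores[indicators].corr()
    for i, a in enumerate(indicators):
        for b in indicators[i + 1:]:
            if abs(corr.loc[a, b]) >= REDUNDANCY_THRESHOLD:
                print(f"{domain}: '{a}' and '{b}' correlate at "
                      f"{corr.loc[a, b]:.2f}; candidates for merging")
```

Applied within each domain separately, such a screen helps shorten the set without eroding the content validity of any single construct.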

We believe it would be extremely useful to define a reasonable size for an indicator set in any particular setting. For instance, if the developers of a set were allowed only 10 indicators, they would need to prioritise those that best support their purpose.11 We are not suggesting that 10 indicators is an ideal number—this will obviously vary according to context—but that defining a target number will provide a valuable discipline and motivation for careful selection of the optimal set.

How often do we need to review the indicator set?

Safety in healthcare is a constantly moving target, which not only means that we regard an increasing number of events as (preventable) patient safety issues, but also that previously important problems or hazards may have been successfully addressed.12 The implication is that we may need to periodically assess whether the indicator set is still valid for measuring the underlying construct, as some indicators may no longer be important while new priorities may have emerged because of innovations or improvements in care. It will remain important to ensure that dropping an indicator does not allow the problem to re-emerge; this can be checked with alternative methods, such as inspection visits or periodic sampling, rather than continuous measurement through quality indicators in all facilities. Changes cannot of course be too frequent; otherwise, the adjustments to the set will become burdensome and tracking change over time will become more difficult.

We might also review the type of indicator used, particularly for rare events that need to be monitored continuously. For instance, we could consider monitoring the time between these rare events (using a G-chart rather than a p-chart) or using a funnel plot around the median to detect outlier performance rather than dichotomising time above or below a threshold, as recently shown to give additional opportunities for improvement of door-to-needle time.13
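
As a minimal illustration of the first option, the sketch below computes g-chart limits for the number of cases between successive rare events, using the standard geometric-distribution approximation; the gap data are hypothetical.

```python
# Minimal sketch: g-chart centre line and control limits for the number
# of cases between successive rare events. The gap data are hypothetical.
import math

# Hypothetical counts of cases between successive rare adverse events.
gaps = [45, 60, 38, 72, 55, 41, 90, 48]

g_bar = sum(gaps) / len(gaps)            # centre line (mean gap)
sigma = math.sqrt(g_bar * (g_bar + 1))   # geometric-distribution SD
ucl = g_bar + 3 * sigma                  # upper control limit
lcl = max(0.0, g_bar - 3 * sigma)        # lower limit, floored at zero

print(f"centre line: {g_bar:.1f}, UCL: {ucl:.1f}, LCL: {lcl:.1f}")
# Gaps above the UCL signal longer event-free spells (improvement);
# a run of unusually short gaps signals more frequent events.
```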

What is the cost of collecting, analysing and using this indicator set?

The purpose, broadly speaking, of all indicators is to monitor and hopefully improve the quality and value of care provided in the health system. Yet, any monitoring activity has a cost, both visible, in the form of resources consumed in producing the indicators, and largely invisible, in the form of staff time consumed in recording and submitting the information, unless it can be derived directly from electronic health records. Such costs are generally only considered once indicators have been put into practice, but they can be considerable, as noted above.2 Although it would be difficult to assess in advance, it would be a useful discipline for those developing indicator sets to specify in advance how much staff time, at different levels of the organisations concerned, should be devoted to reporting indicator information. The principle, for instance, of consuming no more than 1% of a provider organisation’s budget on indicator collection might do a great deal to focus indicator sets on issues that are of real value to multiple stakeholders.
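
As a back-of-envelope illustration of such a cap, the sketch below combines the documentation time reported in the Dutch study cited above with purely hypothetical salary, staffing and budget figures.

```python
# Back-of-envelope sketch of a 1% budget cap on indicator collection.
# The 52 min/day figure is from the study cited above; every other
# number here is purely hypothetical.
DOC_MINUTES_PER_DAY = 52        # documentation time per professional (cited)
WORKING_DAYS = 220              # hypothetical working days per year
STAFF_COST_PER_HOUR = 50.0      # hypothetical fully loaded hourly cost (euros)
N_PROFESSIONALS = 400           # hypothetical number of reporting staff
ANNUAL_BUDGET = 150_000_000     # hypothetical provider budget (euros)

annual_cost = ((DOC_MINUTES_PER_DAY / 60) * WORKING_DAYS
               * STAFF_COST_PER_HOUR * N_PROFESSIONALS)
share = annual_cost / ANNUAL_BUDGET

print(f"indicator documentation cost: EUR {annual_cost:,.0f} "
      f"({share:.2%} of budget)")
# With these illustrative numbers the cost is roughly EUR 3.8M, about
# 2.5% of budget, well above a 1% cap, forcing prioritisation of the set.
```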

Will using this indicator set have any unintended consequences?

Quality indicators may, as noted above, be used by different groups for different purposes, which may lead to conflicts and potentially undesirable consequences.14 For example, private insurers argued for drastic reform of Dutch emergency care using quality indicators they had formulated from examining clinical guidelines, trial results and systematic reviews.7 They proposed, on grounds of cost and efficiency, to centralise emergency care in a very small number of major centres. This proposal was strongly resisted by healthcare professionals on the grounds that it would reduce patient choice and the availability of care, redirect patient flows, change the hierarchy between specialties and have far-reaching consequences for other services.7 Patients who had a stroke, for instance, would have to be treated in these major centres rather than near their homes. In other words, equity and patient-centeredness were traded off in favour of safety and efficiency. We can see from this example that sets of quality indicators can never be a neutral assessment of care, but must always be considered from the particular vantage point of patients, clinicians, insurers or other parties.

Reflections and conclusion

The number of quality indicators has proliferated, without a parallel emphasis on understanding the relationships between indicators or their contribution to the overall objective of monitoring one or more aspects of the quality of care. Instead, quality indicators have increasingly become a burden rather than a useful tool to help us achieve safer and better care. We argue that a focus on the construct being measured and assessing the validity of a (limited) set of indicators is needed to achieve that aim. The call for parsimony and balance in quality indicators is not new, but suggested approaches have not been widely implemented.15 The paper by Xu and colleagues is a reminder that we should aim to define the target audience and identify the dimensions of care most relevant to them, and then develop sets of indicators tailored to measure those dimensions. This is a very different approach from simply creating more and more indicators covering every aspect of care and hoping that they will collectively amount to a useful measure of quality. When designing a questionnaire or developing a psychological test, we are careful not to overburden respondents and therefore measure only what matters to assess the underlying construct and meet the objectives of the study or programme. We hope that the above set of questions will support efforts to refine our approach to quality indicators, to focus on the overall objectives rather than the individual indicators and to measure what really matters to patients, professionals and policymakers.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

References

Footnotes

  • Contributors PJM-vdM and CV both contributed to conception of the paper; they both critically read and modified subsequent drafts and approved the final version. PJM-vdM is editor at BMJ Quality & Safety.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
