There is wide recognition that promoting healthcare value involves decreasing ‘low-value’ services—care that offers no clinical benefit, little benefit relative to its cost, or disproportionate potential harm.1 While low-value care has been presumed to be a problem predominantly in the USA, in the context of an expensive, fragmented, multipayer, fee-for-service system, recent evidence suggests low-value services are pervasive even in government-funded healthcare systems with universal coverage and interoperability.2 Accordingly, low-value care is garnering attention across the globe.3
In response, policymakers, insurers and individual healthcare systems must work together to create and track measures of low-value care. In the USA, a number of states have begun to use such measures to characterise low-value care delivered by healthcare provider organisations.4–6 Many of the existing measures have been derived from the national Choosing Wisely campaign7 with examples such as cervical cancer screening in women >65 years, preoperative testing in asymptomatic patients undergoing low-risk surgical procedures and diagnostic imaging for uncomplicated headache.8 More measures are likely to emerge amid the proliferation of value-based payment and care delivery reforms.
While measuring low-value care is laudable and necessary, it is also challenging. Widely available data sources, such as claims, imperfectly capture the clinical appropriateness of specific services. Measures should be valid and should clearly define which facet(s) of value are being captured, and for which stakeholders. Engagement and collaboration between insurers and clinicians are needed to implement these measures meaningfully. Measures could also create unintended consequences by prompting clinicians to focus disproportionately on measured services to the detriment of other aspects of care, or to select diagnostic codes aligned with a desired outcome. For example, a low-value care measure dissuading antibiotic prescribing in patients with acute bronchitis could drive clinicians to code more diagnoses as ‘upper respiratory tract infection’ rather than ‘acute bronchitis’ without altering the frequency of antibiotic prescribing.
While some growing pains are inevitable, policymakers and clinical leaders have the opportunity to maximise the benefits of low-value care measures by learning from efforts to measure healthcare quality. Despite a number of differences, the concepts of low-value and high-quality care are fundamentally connected via the notion of desirability: in particular, the desirability of reducing the former and increasing the latter. Both value and quality are multifaceted concepts that require multiple measures to characterise, and there are several parallels between efforts to measure quality and efforts to measure low value. Therefore, three lessons from decades of experience measuring quality in the USA—related to both pitfalls to avoid and opportunities to pursue—are salient to measuring low-value care. While we draw on examples and experience within the US healthcare system, the lessons learnt may translate to approaches to low-value care in other healthcare systems as well.
Statistical reliability
As quality measures became more available and accessible, clinicians and payers in the USA embraced them as a tool for providing actionable feedback and driving improvement.9 However, these efforts often inadvertently promulgated measures that were statistically unreliable for their intended use, that is, computed using sample sizes so small that they capture too much random variation (‘noise’) to support fair conclusions about actual clinical performance (‘signal’).
Consider performance on a widely used survey in the USA to assess patient satisfaction, the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey. Statistically, at least 255 survey responses are needed to achieve a reliability of 0.9 (ie, 90% of the variation in measured performance reflects true differences in clinical performance rather than statistical noise).10 Yet the recommended minimum sample size for reporting clinician-level CAHPS performance is only 50 surveys,11 and many clinicians may accrue even fewer.
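The gap between 255 and 50 responses can be made concrete with the Spearman-Brown prophecy formula, a standard psychometric tool for projecting reliability at different sample sizes. This is an illustrative sketch under the simplifying assumption that each survey response contributes equally to the measure; the cited reliability analyses may use different variance-component methods.

```python
def projected_reliability(n, n_ref, r_ref):
    """Project measure reliability at sample size n, given a reference
    point (n_ref responses yield reliability r_ref), via Spearman-Brown."""
    # Single-response reliability implied by the reference point
    # (the Spearman-Brown formula solved for r1).
    r1 = r_ref / (n_ref - r_ref * (n_ref - 1))
    # Spearman-Brown prophecy formula: reliability of an n-response mean.
    return n * r1 / (1 + (n - 1) * r1)

# Reference point from the text: 255 CAHPS responses give reliability 0.9.
# At the recommended reporting minimum of 50 surveys, projected reliability
# falls to roughly 0.64 -- well short of the 0.9 benchmark.
print(round(projected_reliability(50, n_ref=255, r_ref=0.9), 2))
```

Under this assumption, a clinician judged on 50 surveys is assessed with roughly a third of the measured variation attributable to noise rather than true performance.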
This problem is exacerbated by the fact that most clinicians and organisations care for patients covered by different insurers. Many insurers calculate quality performance by specific product or plan, thereby reducing the sample size used to judge quality for individual clinicians or practices. Even for large nationwide programmes such as Medicare fee-for-service, only practices with at least 50 clinicians (representing a minority of practices nationwide) are able to reliably detect a 10% relative difference in quality measure performance.12 The result—inappropriately holding clinicians or groups accountable for quality performance on the insurer or insurer product level—has led to calls to prioritise reliability in quality measurement.13
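The scale of the sample-size problem can be illustrated with a conventional two-proportion power calculation. This sketch is not the method of the cited study: the 80% baseline performance rate, the two-sided 5% significance level and 80% power are all assumptions chosen for illustration.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per group to detect a difference between
    two proportions p1 and p2, two-sided test, normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a 10% relative drop from an assumed 80% baseline (80% -> 72%)
# requires roughly 444 patients per comparison group -- far more than many
# small practices or single-insurer panels accrue for a given measure.
print(n_per_group(0.80, 0.72))
```

Larger performance gaps need far fewer patients (a 20-point drop needs fewer than 100 per group under the same assumptions), which is why only large practices can reliably detect the modest differences that typify measure performance.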
Without attending to reliability, low-value care measures will face similar, if not more intense, challenges. While low-value care services in aggregate have substantial consequences for patient care, specific low-value services may be infrequent events for some clinicians and settings, amplifying the small sample size issue and decreasing the likelihood that they will be reliable at the clinician, small group or individual insurer levels. Policymakers and clinicians should collaborate to create transparency around reliability of low-value care measures while pursuing multipayer initiatives that can engage and hold clinicians accountable for low-value care using ample sample sizes. Such efforts would be aided by establishing bodies that steward the creation (eg, analogous to the National Committee for Quality Assurance) or evaluation (eg, analogous to the National Quality Forum) of low-value care measures.
Measure parsimony
The commendable intention to quantify quality has given rise to countless measures currently used by insurers in the USA, many of which have not been vetted for validity and reliability. Unfortunately, measure proliferation leads to well-known information overload and administrative burdens for clinicians and health systems.14
Like quality, low-value care is a multidimensional concept and therefore vulnerable to quantification via excessive numbers of measures. Given that low-value care measures have not yet been taken up nationwide, policymakers and clinical leaders have an opportunity to work with insurers and other stakeholders to proactively guide the number of measures used for performance measurement. In particular, decision-makers could promote measure parsimony by engaging stakeholders and focusing on validity, reliability and clinical relevance to build consensus about a common consolidated measure set. The success of this approach will also depend on linking selected measures to embedded clinical care processes.
Alignment with financial incentives
To strengthen clinician engagement in quality improvement, quality measure performance in the USA has been increasingly linked to financial incentives—first through payment arrangements such as pay for reporting and pay for performance, and now through value-based payment models such as bundled payments and accountable care organisations. While well intentioned, such efforts have led to policies that are susceptible to unintended consequences.
A salient example is the Centers for Medicare and Medicaid Services Hospital Readmission Reduction Program, a policy that uses the spectre of financial penalties to reduce readmission rates as a measure of quality. Despite early evidence of policy impact, recent evidence suggests that (A) over half of the reductions in readmissions are attributable to changes in risk coding instead of clinical process changes,15 and (B) simultaneous increases in hospital observation stays may be due to hospitals shunting patients towards ‘observation’ rather than ‘inpatient’ status.16 These factors have prompted debate about how to reform the policy and deter behaviours that improve measures with a neutral or negative impact on care.
While every policy is potentially susceptible to unintended consequences, policymakers and clinical leaders should exercise caution when seeking to tie low-value care measures to financial incentives, learning from experiences in quality measurement. Unlike quality measures, which often encourage clinicians to ‘do more’ (eg, increasing pneumococcal vaccination), low-value care is defined by a focus on ‘doing less’. In seeking to deter inappropriate care, the combination of low-value care measures and financial incentives may inadvertently reduce the likelihood that patients receive appropriate care. This dynamic is particularly concerning in clinical areas already marked by healthcare disparities.
One potential solution for ensuring appropriateness would be to pair low-value care measures with counterbalancing quality measures before linking performance to financial incentives. For example, by itself, a low-value care measure seeking to deter inappropriate imaging for uncomplicated low back pain might unintentionally prompt primary care clinicians to refer patients more frequently to orthopaedic surgery, risking increased costs and decreased access to specialty services. Pairing the low-value care measure with a balancing measure of referral rates could help monitor and correct for the unintended effect.
Policymakers, clinical leaders and other stakeholders must ensure that steps to adopt low-value care measures are accompanied by a focus on a parsimonious set of robust measures that are thoughtfully aligned with financial incentives and payment models. As this occurs, clinicians and health systems must avoid using unreliable measures, mitigate the information overload and administrative burden created by excessive numbers of measures, and monitor carefully for unintended consequences to ensure that low-value care measures do not become the very thing they seek to deter: interventions that do more harm than good.
Twitter @marcottl, @JoshuaLiaoMD
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement There are no data in this work.