Article Text

A nudge towards increased experimentation to more rapidly improve healthcare
Allison H Oakes,1,2 Mitesh S Patel1,2,3,4

  1. Crescenz Veterans Affairs Medical Center, Philadelphia, Pennsylvania, USA
  2. Penn Medicine Nudge Unit, University of Pennsylvania, Philadelphia, Pennsylvania, USA
  3. Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
  4. Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania, USA

Correspondence to Dr Mitesh S Patel, University of Pennsylvania, Philadelphia, PA 19104, USA; mpatel@pennmedicine.upenn.edu

In any healthcare setting, the quality of care depends on the effectiveness of a given treatment and on the way that the treatment is delivered. The complexities of modern healthcare have created gaps in our ability to consistently deliver the most effective and efficient care. As a result, significant undertreatment and overtreatment co-occur.1–3 This reality has led diverse stakeholders to overhaul the environment, context and systems in which healthcare professionals practise. However, while well intentioned, most ‘advances’ in healthcare delivery rely on untested or poorly tested interventions.4 5 As a consequence, effective interventions do not scale as fast as they should, and ineffective interventions persist despite providing no benefit. The status quo presents an opportunity to improve the delivery of care through a more systematic approach.

Successful innovation requires experimentation. Embedded research teams around the world have started to systematically test whether subtle changes to the way information is framed or choices are offered can nudge medical decision making.6 7 The trial by Schmidtke and colleagues demonstrates the feasibility and necessity of rapid-cycle, randomised testing within a healthcare system.8 The authors randomly assigned 7540 front-line staff to receive either a standard letter reminding them of influenza vaccination or one of three letters that used insights from behavioural economics, framing social norms in different ways, to try to nudge healthcare workers towards vaccination. Despite this effort, all four arms had the same vaccination rate of 43%, meaning that none of the social norm interventions led to meaningful changes in behaviour. All too often, policies and programmes that ‘make sense’ have been implemented without any kind of formal evaluation. In the Schmidtke trial, however, the rigorous study design allowed researchers to quickly and decisively conclude that the social norms letters were no better than a simple reminder letter. In turn, system leaders were able to make an informed choice about whether to iterate until the intervention proved successful, or to abandon it in favour of other competing quality improvement initiatives. Innovation is non-linear, but experimentation is efficient. Through other trials, we have learnt that nudge interventions can improve medical decision making across a wide range of behaviours, such as generic prescribing, cancer screening and imaging tests at the end of life.9–12 This growing area of research provides a road map for change.
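To make the mechanics of such a trial concrete, the following is a minimal sketch of a four-arm randomisation and analysis in Python. It is illustrative only: the arm labels, the simulated 43% vaccination rate in every arm and the chi-squared test are our assumptions, not the analysis that Schmidtke and colleagues reported.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Randomly assign 7540 front-line staff to one of four letter arms.
# Arm names are hypothetical placeholders for the trial's letters.
arms = ["standard", "norm_letter_1", "norm_letter_2", "norm_letter_3"]
assignment = rng.choice(arms, size=7540)

# Simulated outcome: every arm vaccinates at roughly 43%,
# mirroring the trial's null result.
vaccinated = rng.random(7540) < 0.43

# Build the 4x2 contingency table (vaccinated vs not, per arm).
table = np.array([
    [np.sum(vaccinated[assignment == a]),
     np.sum(~vaccinated[assignment == a])]
    for a in arms
])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # large p value: no detectable arm effect
```

Because the randomisation, outcome capture and analysis are this simple, a health system can run such a comparison in weeks rather than years.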

Randomised experiments that occur within a healthcare system are often perceived as falling into a grey area between quality improvement and research.5 13 14 This creates scientific, political, logistical and ethical challenges. Humans are inherently uncomfortable with randomisation: people frequently rate A/B tests designed to establish the comparative effectiveness of two policies or treatments as inappropriate, even when universally implementing either A or B, untested, is seen as acceptable.15 Rigorous embedded research is only possible to the extent that it reconciles the sometimes competing needs of the research team, service managers and system leaders. Different levels and types of randomisation have trade-offs. Do you randomise a laboratory test, the patient, the physician, a provider organisation or an entire hospital? Is the control group the standard of care or something else? Pragmatic study designs can be used to ‘naturally’ randomise people or groups. For example, a stepped-wedge cluster randomised clinical trial within a network of five radiation oncology practices was used to test the effectiveness of introducing a default imaging order in the electronic health record (EHR) to reduce unnecessary daily imaging during palliative radiotherapy.12 Even in the absence of a formal evaluation, this policy would likely have been rolled out across sites in a similarly staggered way.
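As a rough illustration of how a stepped-wedge design mirrors a staggered rollout, the sketch below builds a crossover schedule for five practices in Python. The site names and the one-site-per-period stepping are assumptions made for illustration; the published trial's actual schedule may have differed.

```python
import random

# Hypothetical sites; 'C' = control period, 'I' = intervention period.
practices = [f"practice_{i}" for i in range(1, 6)]

random.seed(7)
crossover_order = random.sample(practices, k=len(practices))  # randomised step order

n_periods = len(practices) + 1  # a shared baseline period, then one step per site
print("period:    ", " ".join(str(p) for p in range(n_periods)))
for site in practices:
    step = crossover_order.index(site) + 1  # period at which this site crosses over
    row = ["C" if p < step else "I" for p in range(n_periods)]
    print(f"{site}:", " ".join(row))
```

Every site eventually receives the intervention, which is often what makes the design palatable to service managers, while the randomised crossover order preserves a valid comparison.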

Randomised experiments have an uncanny ability to produce surprising, but credible, results. Just as in clinical care, ‘health policy reversals’ occur when a new, more rigorous evaluation contradicts current practice.16 Without a rigorous study design, findings that conflict with our a priori hypotheses are selectively discounted and discarded in favour of dogma. No matter the result, randomised trials advance our understanding of how best to improve our healthcare system. The movement around price transparency illustrates this tension. Despite inconsistent evidence about its effectiveness, price transparency has gained significant legislative traction in the USA. While ‘shoppable healthcare’ is an appealing idea, simply providing price information to physicians and patients is unlikely to achieve the type of large-scale change that its proponents expect.17 18 To this point, a randomised trial tested the effect of displaying Medicare allowable fees on the ordering of inpatient laboratory tests.19 The trial took place at three different hospitals, covered 60 of the most expensive and most frequently ordered inpatient laboratory tests, and included 98 529 patients. In the main analysis, there was no significant overall change in ordering between the price transparency group and the control group. A subanalysis found that displaying prices for more expensive tests led to a small though significant decline in test ordering, but this was offset by increases in the ordering of less expensive tests. Careful evaluation of intended and unintended consequences is essential to optimising nudge interventions. Because the price transparency intervention was bounded within the context of an experiment, it was easy for the health system to sunset; had it worked, it could have been quickly scaled. Future price transparency interventions will likely need to be better targeted, framed or combined with other approaches. Rapid, small-scale, randomised experiments are the only way to effectively prescribe and ‘deprescribe’ healthcare interventions.

Nudges are not one size fits all.20 For this reason, we need to design multifactorial experiments that carefully consider and examine the mechanisms that underlie a nudge intervention, rather than simply testing whether it works.21–24 Randomised trials of prescribing behaviour demonstrate the extent to which seemingly similar nudges can differ drastically in their effect. In a 2014 study, a health system changed the EHR default medication list from displaying brand and generic medications to displaying only generic medications, with the ability for clinicians to opt out. This intervention was associated with a 5% increase in generic prescribing, a clinically meaningful, positive result. However, in a follow-up study the researchers made subtle changes to the design that increased the impact of the nudge. Instead of changing the default medication list, they added an opt-out checkbox labelled ‘dispense as written’ to the final EHR prescription screen. If left unchecked, the generic-equivalent medication was automatically prescribed. This intervention increased generic prescribing throughout the health system from 75% to 98%. These EHR interventions were equally inexpensive to implement, but one was far more effective than the other.9 10 25 In a world of limited resources and unending initiative fatigue, it is important to focus our efforts on trials that are both theoretically compelling and sufficiently powerful. Intervention ‘strength’ exists on a continuum.20 Researchers and healthcare decision makers have to use their expertise and judgement to decide how aggressively to intervene on harmful behaviours. While hindsight is often 20/20, the Schmidtke team deployed an interesting but relatively weak social norms intervention. Given the negative externalities associated with suboptimal healthcare worker vaccination, and the known barriers to changing vaccination behaviour, the researchers would have been justified in experimenting with a more forceful intervention or policy. Research designs that allow us to learn why and when different nudge strategies work are a requirement of purposeful innovation.
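To show what a multifactorial randomisation might look like in practice, the sketch below crosses two hypothetical nudge mechanisms, a default setting and a message framing, and assigns clinicians to the four resulting cells in a balanced way. The factor names and cell structure are illustrative assumptions, not drawn from the cited prescribing trials.

```python
from collections import Counter
from itertools import product
import random

random.seed(1)

# Two hypothetical mechanisms, each with two levels, crossed factorially.
factors = {
    "default": ["opt_in", "opt_out"],
    "framing": ["neutral", "social_norm"],
}
cells = list(product(*factors.values()))  # the four factorial cells

# Shuffle clinicians, then cycle through the cells for exact balance.
clinicians = [f"clinician_{i}" for i in range(200)]
random.shuffle(clinicians)
assignment = {c: cells[i % len(cells)] for i, c in enumerate(clinicians)}

print(Counter(assignment.values()))  # 50 clinicians per cell
```

A factorial layout like this lets a single trial estimate each mechanism's main effect and their interaction, rather than testing one bundled nudge against usual care.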

There are a number of design choices that can improve the feasibility and impact of rapid-cycle, randomised trials on healthcare delivery.4 14 First, we must embed research teams within health systems in order to create the capacity for this kind of work. Expertise is required to identify a promising intervention, design the conceptual approach, conduct the technical implementation and rigorously evaluate the trial. These teams are also able to design interventions within the context of existing workflows in order to ensure that successful projects can be quickly scaled and that ineffective initiatives can be seamlessly terminated. Second, we must take advantage of existing data systems. The field of healthcare is rife with detailed and reliable administrative data and electronic medical record data. These data offer the potential to do high-quality, low-cost, rapid trials. Third, we must measure a wide range of meaningful outcomes. We should examine the effect of interventions on healthcare costs, healthcare utilisation and health outcomes. In addition, we need to carefully consider and test for potential spillover effects or unintended consequences. Fourth, we must design randomised studies that consider the mechanism of action; it is not enough to know that an intervention works. A mechanistic approach to system redesign will allow us to deploy efficient interventions that nudge enough, but not too much. Finally, we need to adopt the principles of registering, prespecifying and disclosing from the world of medicine. The sustained publication of rapid-cycle, randomised interventions, whether positive or null, is needed to create synergies and avoid duplication of effort across different healthcare systems.

We owe it to our patients to deliver better, evidence-based care. Meaningful innovation will require a commitment to experimentation. Luckily, the complex world of healthcare provides endless opportunities for rapid-cycle, randomised trials that target healthcare costs and outcomes.

References

Footnotes

  • Twitter @oakes_ah, @miteshspatel

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests AO is supported by the Department of Veterans Affairs Advanced Fellowship Program in HSR&D. MP is supported by a career development award from the Department of Veterans Affairs HSR&D. MP is founder of Catalyst Health, a technology and behaviour change consulting firm. MP also has received research funding from Deloitte, which is not related to the work described in this manuscript. No other funding or disclosures were reported.

  • Patient consent for publication Not required.

  • Provenance and peer review Commissioned; internally peer reviewed.
