Are increases in emergency use and hospitalisation always a bad thing? Reflections on unintended consequences and apparent backfires
Kaveh G Shojania
Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A1, Canada
Correspondence to Dr Kaveh G Shojania, Faculty of Medicine, University of Toronto, Toronto, ON M5S 1A1, Canada; kaveh.shojania{at}sunnybrook.ca


Verschlimmbessern: German word meaning to make something worse in an effort to improve it

In this issue of BMJ Quality & Safety, Snooks et al1 report a stepped-wedge trial involving 32 general practices in Wales. A web-based software program presented clinicians with estimates of patients’ risk of future emergency attendance on the basis of clinical characteristics, past health services use and socioeconomic factors. The aim was that clinicians would then develop management plans to avert the acute deteriorations that necessitate emergency department attendance. Surprisingly, the intervention produced a small but statistically significant increase in hospital admissions and in the use of other National Health Service services.
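
To make the idea of such a tool concrete, here is a minimal sketch, in Python, of how a logistic-regression-style score might combine a few patient characteristics into a predicted probability of emergency attendance. This is a hypothetical illustration only: the feature names, coefficients and threshold are invented and are not those of the model evaluated by Snooks and colleagues.

```python
import math

# Hypothetical coefficients for a logistic-regression-style risk model.
# The feature names and values are invented for illustration; they are
# not the model used in the Welsh trial.
COEFFICIENTS = {
    "intercept": -3.0,
    "age_over_75": 1.1,                 # indicator (0 or 1)
    "emergency_visits_last_year": 0.6,  # count of prior attendances
    "chronic_conditions": 0.4,          # count of chronic conditions
    "high_deprivation_area": 0.5,       # indicator (0 or 1)
}

def predicted_risk(patient: dict) -> float:
    """Return the predicted probability of emergency attendance in the
    coming year for one patient, using a logistic (sigmoid) link."""
    z = COEFFICIENTS["intercept"]
    for feature, weight in COEFFICIENTS.items():
        if feature != "intercept":
            z += weight * patient.get(feature, 0)
    return 1.0 / (1.0 + math.exp(-z))

# Example: flag higher-risk patients for review by their general practice.
patients = [
    {"id": "A", "age_over_75": 1, "emergency_visits_last_year": 3,
     "chronic_conditions": 4, "high_deprivation_area": 1},
    {"id": "B", "age_over_75": 0, "emergency_visits_last_year": 0,
     "chronic_conditions": 1, "high_deprivation_area": 0},
]
for p in patients:
    risk = predicted_risk(p)
    flag = " -> flag for care-planning review" if risk > 0.5 else ""
    print(f"Patient {p['id']}: predicted risk {risk:.0%}{flag}")
```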

The authors deserve congratulations for undertaking this evaluation: policies intended to improve care or reduce costs often presume the effectiveness of a given approach when little evidence exists to support it. Targeting high-cost users of healthcare is a widely recommended approach which is harder to execute than generally recognised.2 Identifying hospitalised patients at high risk for readmission also represents a topic where predicting risk comes easily but acting effectively does not.3 But, Snooks and colleagues have shown not only that such ambitious goals are hard to achieve, but that sometimes an intervention which ought to work makes things worse.

That said, increases in the use of emergency services do not necessarily imply worse quality of care. Patients who made unplanned visits to the hospital may well have had acute issues not easily dealt with in an outpatient setting. Before returning to that question, we will briefly review unintended consequences in general, followed by a focus on the subset of situations in which a change produces the opposite of its intended goal.

Unintended consequences outside healthcare

Sociologist Robert Merton famously drew attention to the problem of unexpected, undesirable effects with his classic essay, ‘The Unanticipated Consequences of Purposive Social Action’.4 It may seem obvious now that change efforts, whether in whole societies or single institutions, may not only fail to achieve their aims but sometimes backfire. Yet founders of the field fully expected sociology to become a predictive science on par with physics—‘savoir pour prévoir, prévoir pour pouvoir’ (know in order to foresee, foresee in order to be able to act), in the words of the 19th century French philosopher Auguste Comte.

Even setting aside the changes in sociologists’ views since the optimistic positivism of the 19th century, the need for humility when it comes to prediction has long been recognised. ‘It’s difficult to make predictions, especially about the future.’ The wide range of persons to whom this wry remark has been attributed, from Nostradamus through to Mark Twain, Niels Bohr and Yogi Berra,5 attests to the long-standing recognition of our limited ability to anticipate the consequences of certain kinds of actions—in particular actions in complex social systems, as opposed to mechanical ones.

Case studies of the ‘Law of Unintended Consequences’ can fill a book.6 The topic of introducing species not native to the local ecosystem would generate a volume of its own. With ecosystems, just as with societies and large organisations, apparently simple changes to one component can produce unanticipated effects elsewhere. The work of sociologist Charles Perrow, through his Normal Accidents theory,7 highlighted how complex systems can become opaque even to expert operators. This type of complexity, when combined with ‘tight coupling’ between system elements, makes unintended consequences, including catastrophic accidents, all but inevitable. However, in examples such as the intervention evaluated by Snooks and colleagues,1 the issue does not arise from unexpected interactions among components of complex systems. Here, something about the intervention itself caused it to increase rather than decrease use of emergency services. So, let us move on to discussing these sorts of backfires—first using examples from outside healthcare quality improvement (table 1).

Table 1

‘Backfire examples’ from outside healthcare quality improvement

From Barbra Streisand to cobras and compensatory risk: backfire effects outside healthcare quality improvement

Backfire effects occur commonly enough that several eponymous names exist for them. The ‘Streisand effect’8 refers to instances where efforts to suppress information instead promote its spread. Barbra Streisand wanted pictures of her Malibu home removed from a website where they had been incidentally collected among aerial photos documenting coastal erosion. Prior to her taking legal action, only six visitors to the website had downloaded the images of concern. The ensuing press coverage led to over 400 000 visits the following month alone.

The ‘cobra effect’ involves the creation of perverse incentives. The British colonial government in 19th century Delhi supposedly introduced a reward for dead cobras to curb the problem of venomous bites. The reward programme created a market for breeding cobras so that people could turn them in for the bounty. This might merely have constituted a failed programme, except that abolishing the counterproductive reward resulted in breeders releasing their now worthless cobras. A similar example of a reward programme for rats in early 20th century Hanoi is better documented,9 and efforts to control the wild pig population in Fort Benning, Georgia, provide a relatively recent example.10

That poorly designed financial rewards can create perverse incentives hardly comes as a surprise. But economists have pointed to other, less obvious backfires. The Peltzman effect11 describes increases in risky behaviour that follow from making an activity safer: building safer cars, for instance, may encourage more reckless driving. Peltzman probably overestimated so-called ‘compensatory risk’ in the specific case of automobile safety regulations, but other documented examples exist. Compensatory risk may also explain why injuries exact a heavier toll in American football than in rugby, despite football players wearing protective equipment, playing shorter games and tackling less often.12 Clad in protective helmets and pads, players may feel safer hurling themselves at opponents than they would if bodies collided more directly.

Unintended consequences in quality improvement: from the predictable to the perverse

Quality improvement reports often use ‘balancing measures’13 to monitor predictable unintended consequences—for instance, tracking readmission rates in a project aimed at reducing length of stay (table 2). The literature on unintended consequences of performance measures constitutes a genre in itself,14–17 with many examples of Goodhart’s law—when a measure becomes a target, it ceases to be a good measure.18 19 In addition, the proliferation of measures from different external groups can cause measurement fatigue.20 And, of course, health information technology furnishes numerous examples of unintended consequences for workflow, morale and clinicians’ interactions with patients, as well as new types of errors (table 2).21–28

Table 2

Predictable and less predictable undesirable effects of improvement interventions in healthcare

Some might regard all of the ‘unintended consequences’ in table 2 as predictable. Maybe it seems obvious that the inconvenience of donning gowns, masks and gloves will make doctors and nurses less likely to enter the rooms of patients isolated for infection control and thus increase adverse events.29 But, some examples surely come as a surprise. For instance, ‘intentional rounding’, where nurses check in frequently (eg, hourly) with every patient using a standardised protocol, may increase patient satisfaction, as well as reduce patient falls and call light use.30 The wide variation in purpose and execution of this practice31 and the mixed evidence supporting it30 will not surprise seasoned consumers of the literature. It probably does surprise, though, to learn that one study reported a perverse increase in call light usage. For some patients, hourly rounding created the worry that nurses might not return for quite some time, so they used their call lights more frequently than before.32

Backfires in quality improvement

That last example brings us to the species of unintended consequence in which the result is not some tangential undesirable effect—like increasing call light usage when the primary interest lay in reducing falls—but the exact opposite of the intended improvement (table 3). I have not shown examples where the ‘backfire effect’ resulted from an implementation issue incidental to the intervention. For instance, in a multisite study of medication reconciliation, some sites saw temporary increases in medication discrepancies due to problems arising from concomitant implementation of new electronic health record systems.33 Medication reconciliation bears no intrinsic relationship to electronic health records. Moreover, examples of medication reconciliation facilitated by electronic systems exist.34 Worsening the improvement target because of implementation problems differs from the examples in table 3, where intrinsic features of the intervention seem to have caused the backfire.

Table 3

Apparent backfires in quality improvement—worsening what the intervention aimed to improve

Many examples of apparent backfires in quality improvement (table 3) arise in contexts where interventions expose health professionals to new information involving risk—the probability of an outpatient needing to visit the emergency department in the coming year, as in the study by Snooks and colleagues.1 In another example, pharmacists visited older patients in their homes to assess patients’ understanding of and adherence to their medications, identify the need for medication adherence aids, report possible drug reactions or interactions to general practitioners and remove out-of-date drugs.35 This intervention targeted two well-known causes of hospitalisation among older patients, namely adverse drug reactions36 and medication non-adherence.37 Yet, this reasonably conceived and executed intervention produced a highly significant 30% increase in the rate of readmission (p=0.009).

One particularly instructive example is an intervention in the US Veterans Affairs system,38 which enrolled hospitalised medical patients who had chronic conditions frequently associated with hospitalisation and randomised roughly half to intensive primary care support, beginning with visits from clinic staff during the hospital stay to assess their postdischarge needs. The intervention achieved high fidelity in so far as 93% of intervention patients visited the clinic at least once compared with 77% of controls (p<0.001). And, intervention patients attended their primary care clinics a mean of 3.7 times vs 2.2 visits among controls (p<0.001). Yet, overall, the intervention achieved the opposite of the intended result, with a higher monthly readmission rate (0.19 vs 0.14, p=0.005) and more days of rehospitalisation (10.2 vs 8.8, p=0.041). In discussing these unanticipated results, Weinberger et al 38 point out that the intervention selected patients at high risk for readmission in the first place. Moreover, ‘the primary care offered to these seriously ill patients may have led to the detection and treatment of previously undetected medical problems.’

A longer follow-up period might well have produced the desired result of decreased hospitalisations.38 Over time, primary care physicians would evolve new strategies for supporting this high-risk group of patients. Yet, the pool of underuse is often of comparable magnitude to that of overuse,39 40 just as errors of omission are probably at least as common as errors of commission.41 It is not hard to imagine that, even as outpatient providers become more comfortable or skilled managing higher risk patients without sending them to hospital, such interventions uncover other high-risk patients living in ‘benign neglect’. Once identified, some of them will inevitably be referred to the hospital, and net use of acute care services may not only fail to decrease, but actually increase. In this sense, a programme that appears on the face of it to have failed (eg, increased rather than decreased admissions) may in fact have made things better for patients themselves.

Healthcare professionals of all types tend to err on the side of caution. When interventions highlight unfamiliar types of risk, we can expect health professionals to put safety first and err on the side of sending some patients to the hospital. But, we can keep this possibility in mind when designing such interventions. An intervention highlighting new types of risk to clinicians may initially increase use of health services. Having this expectation from the outset may prevent the premature abandonment of interventions which might well achieve their intended goals if kept in place for longer.

References

Footnotes

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Commissioned; internally peer reviewed.
