Are ‘hybrid’ interventions inherently self-sabotaging?
Penelope Hawe
Faculty of Medicine and Health, Menzies Centre for Health Policy and Economics, The University of Sydney, Sydney, New South Wales, Australia
Correspondence to Prof Penelope Hawe; penny.hawe@sydney.edu.au

In this issue of BMJ Quality & Safety, Hampton and colleagues report a process evaluation of an intervention trial intended to encourage older patients’ involvement in their hospital care.1

The logic of the intervention, Your Care Needs You (YCNY), was that greater patient involvement in aspects of care in hospital would carry over to home after discharge, preventing avoidable repeat admissions. YCNY was described as a ‘hybrid’ intervention. Ward-level staff were obliged to deliver ‘fixed’ components: a booklet, an advice sheet and a video. But they were also invited to design and deliver ‘flexible’ components, that is, any other components that the ward team thought would also encourage patients to take part in the selected aspects of their care (some examples were offered by the investigators). One of the eight wards went all in, embracing the challenge of designing flexible components. The others chose differently, keeping to the fixed components only. Overwhelmingly, these became ‘taskified’, that is, delivered in a perfunctory way.

The authors concluded that hybrid intervention research seems to be at a ‘crossroads’ and that it is hard for already pressured staff to do the creative work required for the flexible components. That observation fits their data. But engagement with the fixed components was also lacklustre. Maybe the message ward staff received was simply: these components you have to do, but those ones you don’t. Or maybe: some components are important and the others less so. If so, their results are not surprising. Perhaps hybrid interventions are destined to be problematic. Although the findings of the YCNY trial are yet to be fully reported, what has been reported so far resonates with many of us trying to make interventions more effective. It is timely, therefore, to explore the logic of hybrid interventions.

Mixed and contradictory notions about core, flexible, fixed and adaptable components have developed over time

Frustratingly, the terms are used interchangeably by some and with different meanings by others. In essence, two types of intervention structure have been pursued. To some researchers, ‘core’ components are fixed or unchanging in different sites, but they may be accompanied by ‘flexible’ components that are allowed to vary from place to place. Fidelity (integrity) is defined by adherence to the delivery of the core components. This idea was first endorsed in 2000 in the Medical Research Council (MRC) guidance on complex interventions, which referred to ‘constant’ (core or fixed) and ‘variable’ (or adaptable) components.2

To other researchers, the form of the component is not fixed. It can vary from site to site (adapt) while maintaining fidelity to the function the components play in the intervention theory or hypothesised change process.3–5 All components are ‘core’ (ie, essential) and all are permitted to adapt if necessary, as part of a codesign process with the sites. If components adhere to the same function in different settings/sites then the integrity of the intervention is preserved.3–5 The advantage of this is that context-level adaptation is considered at the efficacy trial stage, creating at the outset (in theory at least) interventions which are demonstrably transportable from place to place. The new MRC guidance on complex interventions embraces this idea of functional fidelity.6 It then invites researchers to decide whether variation is permitted or prohibited on particular components, if researchers are still inclined to keep some components fixed in form. Damschroder and colleagues suggest that the distinction between core and adaptable components may only be discerned over time by trial and error in multiple contexts.7

Unfortunately, however, some of the language now in use makes it even clearer that giving components different status demands better a priori justification. Butler and colleagues use the term ‘discretionary’ when referring to additional or flexible components, a term which implies a less vital role in the change process.8 The Consolidated Framework for Implementation Research uses the term ‘adaptable periphery components’, which may inadvertently carry a similar marginal connotation.7 Essentially, researchers using this way of thinking have to be comfortable with saying to ward staff that they may deliver things which researchers (currently at least) deem not important to the theory or mechanism of change. In contrast, others argue that all the components are part of the cause or mechanism.3 4 If they are not, then they perhaps have no business being there.9

At best then, hybrid interventions send a deflating message about the value of practitioner-led thinking. But at worst, hybrid interventions may be self-sabotaging.

Adaptable components harness practitioner agency and creativity

Hybrid interventions uniquely combine mechanisms of action from opposite ends of Greenhalgh and colleagues’ innovation theory spectrum.10 The fixed components are underpinned by a managerial mechanism of action (ie, specific, orderly, planned and make-it-happen). This is tested alongside a more social and emergent mechanism of action that underpins the practitioner-led components (ie, unpredictable, self-organising and let-it-happen).10 It is not unusual to see reports of interventions unfolding successfully with a managerial mechanism of action (such as monitoring systems for community-based prevention).11 But complexity-harnessing interventions that minimally prescribe the process and provide maximum feedback to allow review and adjustment are successful, too.12 Indeed, there is strong direct evidence that fostering reinvention across sites and allowing practitioners to modify interventions to suit their needs means practices are more likely to be adopted.10

This means that, by passing off the so-called flexible or periphery elements as optional, a hybrid intervention undermines the very actions that might lead to effective and sustained problem-solving.

How this has come about is puzzling. Maybe Greenhalgh’s seminal work on diffusion of innovation has been taken up in different ways because it is hard to shake the different explanatory schemas that researchers automatically bring to what they observe. Greenhalgh’s team spoke about complex innovations having ‘fuzzy boundaries’. Interventions were conceptualised as having a ‘hard core’ (‘irreducible elements of the innovation itself’) and a ‘soft periphery’ (the organisational structures and systems for full implementation).10 Soft periphery elements were listed as part of the assimilation process. They were not listed as part of the innovation.

However, to some researchers, elements are components to be delivered and counted, wherever they appear in a hypothesised change process. To others, elements might be capacities and processes to be identified and coached or better rewarded. In any event, in network theory a core-periphery structure does not necessarily equate to weakness at the periphery. Actors on the periphery just hold a different type of power to those in the centre.13 Peripheral actors connect central actors to novel resources (material, social, emotional, informational).13 These may prove make-or-break when it comes to adoption of innovation.

We are still learning what interventions (really) are

Intervention design requires deep reflection and what Greenhalgh calls ‘epistemological labour’. For example, components may have hidden and multiple functions. Resources designed to increase and distribute knowledge can have other (more) important roles, for example, to build relationships between staff and patients.14 The proportion of staff trained to deliver an intervention can be a better predictor of the outcomes than time spent by staff on the intervention delivery.15 In other words, complex interventions act through multistrand pathways of change. This means that conventional measures of the dose of intended components can fail to detect how change occurs. Intervention logic must be interrogated at the outset. Components must also be fully observed to see how they function in practice and from place to place. Replication studies have been fruitful.5

What gets theorised matters too. Individual behaviour theory tends to dominate intervention design.16 A more logical starting point would be to get to know the context or system into which the intervention is to be introduced and how the problem is recurrently produced by that system. Theories about settings and system dynamics can be drawn on to determine what functions need to be enhanced, extinguished or introduced by the intervention’s components or strategies. Activity setting theory, for example, identifies the number of roles in the setting (roles like leadership, supervision and feedback) and how they are distributed.17 Ecological systems theory invites consideration of the available resources/capacity (people, time, materials, skills) including whether these are sufficient to ‘couple’ with the intervention.18

Conclusion

All intervention components should matter, and their functions or roles in the local context/system need to be theorised in the change process. We cannot predict in advance which components will be the most important. This will vary from site to site and depend on complex interaction dynamics. Studies like those of Hampton and her colleagues, with their extensive investment in ethnographic methods, are therefore vital for increasing our understanding.1 Indeed, a wide lens is critical to understanding intervention complexity and to countering the popular tendency to narrow the gaze to ‘barriers and enablers’.19 Finally, researchers should be fully cognisant of pre-existing dynamics, appreciating that adaptation is not just a programme-level phenomenon but a system-level capability.20 In other words, a worthwhile intervention is not merely something that has been implemented. It betters the system as a whole.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

Footnotes

  • Contributors PH wrote the paper and is the guarantor.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
