
Why evaluate ‘common sense’ quality and safety interventions?
Angus IG Ramsay, Naomi J Fulop
Department of Applied Health Research, University College London, London, UK
Correspondence to Dr Angus IG Ramsay, Department of Applied Health Research, University College London, 1-19 Torrington Place, London WC1E 7HB, UK; angus.ramsay@ucl.ac.uk


At times, the decision to redesign a healthcare service may be driven by a sense that ‘something must be done’, for instance, evidence of a significant failure within a hospital or national data indicating variable provision of evidence-based care. Under such circumstances, planners may draw on their past experience or be guided by research evidence; they may also turn to solutions perceived as self-evidently good ideas. Examples of such apparently ‘common sense’ interventions include the ongoing drives towards integration of various domains of care1 and 7-day working2: these are commonly seen as likely to bring about such desirable improvements as increased provision of evidence-based care and better patient experience and outcomes.

Perhaps another apparently common sense intervention is the introduction of single-room accommodation, the impact of which in an English NHS hospital is evaluated by Maben et al.3 Because staff and patients moved to a nearby, newly built hospital, the cost and disruption likely to result from converting an existing hospital from traditional wards and bays to single rooms were avoided, making this intervention relatively straightforward. Further, the intervention might reasonably be expected to address challenges such as mixed-sex wards and healthcare-associated infection, while also providing a care environment more in line with patient preferences. Indeed, such benefits were anticipated when the studied hospital opened.4

Maben et al demonstrate several benefits of evaluating such interventions, making clear in the process that they are anything but straightforward. For example, they show the value of using research evidence to inform the selection of a range of impact measures, covering such key domains as quality and safety, staff and patient experience and, crucially, cost. Further, by analysing multiple time points, both before and after the change, and by including ‘control’ hospitals in their analysis, the authors can assess the nature of impacts and the degree to which these might be attributed to the studied intervention rather than to wider secular trends. Finally, conducting in-depth research in more than one care setting within the participating hospitals permits a nuanced analysis of both positive and negative outcomes of the shift to single-room accommodation.

As a result, Maben et al3 present evidence that change—no matter how well intentioned—is most unlikely to prove a panacea; rather, it will have multiple complex effects on the organisation, provision, experience and outcomes of care. Further, the authors demonstrate that these effects (whether positive or negative) may vary across services, and that they will not be perceived by staff or patients as purely a good or bad thing, but rather as a combination of characteristics that will be valued differently by different stakeholder groups.

The study thus demonstrates that, in settings as complex as healthcare, even seemingly straightforward interventions are unlikely to have straightforward impacts, and the unintended consequences of change may take many forms.5 6 Planned outcomes may not materialise in the form or direction anticipated (eg, single-room accommodation conferred no significant additional benefit in terms of healthcare-associated infection, perhaps because infection control has advanced to such a degree, more generally, across the English NHS and elsewhere). Further, there may be impacts beyond the scope anticipated by planners. Importantly, such consequences may be positive or negative: both are potentially valuable to understanding the change that has been made.

Evaluations of the kind described here make several contributions. First, and most importantly, they enrich our understanding of the issues at hand. Second, by analysing a range of factors common to many contexts, such research identifies lessons and principles that may be generalised to other settings where equivalent changes are under consideration. Third, such research illustrates vividly how complex the implications of a significant change to the organisation and provision of care might be, and how variably such a change might be perceived by different stakeholder groups; by extension, evaluations such as the one presented by Maben et al3 demand that planners consider potential change from multiple perspectives.

Recognising the value of evaluation also means considering what intensity of evaluation is appropriate, depending on need and purpose.7 Large-scale, costly interventions require evaluation of a correspondingly substantial character, using, like Maben et al, a range of methods over an extended time period, in order to develop learning that might be of value more widely.8 Many smaller-scale service changes may not require such ‘high-effort’ evaluation, but all service changes are likely to benefit from some evaluation, such as local audit; indeed, to insist on only high-effort evaluation may be seen as an example of the best being the enemy of the good. There are persuasive arguments in favour of ‘good enough’ evaluation, in which change leaders evaluate their intervention in terms of selected key outcomes, potentially in collaboration with researchers to articulate the purpose of change and to identify meaningful measures.7

Further, regardless of the scale of evaluation proposed, planners should be clear about why they wish to carry out a given change, what it will achieve and how its objectives will be met.9 This clarity, which evaluators can help planners to achieve, will support evaluation by generating meaningful impact ‘measures’ (both qualitative and quantitative), whichever approach is adopted. It is also likely to support the development and implementation of the change itself: in making a compelling case to stakeholders, in setting objectives and managing progress against them, and in knowing whether or not the change has delivered the expected impact. Given these potential benefits, perhaps it is the embedding of evaluation in change that truly represents common sense.


Footnotes

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.
