
Impact of introducing an electronic physiological surveillance system on hospital mortality
  1. Paul E Schmidt1
  2. Paul Meredith2
  3. David R Prytherch2,3
  4. Duncan Watson4
  5. Valerie Watson5
  6. Roger M Killen6
  7. Peter Greengross6,7
  8. Mohammed A Mohammed8
  9. Gary B Smith9

Author affiliations:

  1. Medical Assessment Unit, Portsmouth Hospitals NHS Trust, Portsmouth, Hampshire, UK
  2. TEAMS Centre, Portsmouth Hospitals NHS Trust, Portsmouth, Hampshire, UK
  3. School of Computing, University of Portsmouth, Portsmouth, Hampshire, UK
  4. Intensive Care Medicine and Anaesthesia, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
  5. Critical Care Outreach, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
  6. The Learning Clinic, London, UK
  7. Department of Primary Care and Public Health, Imperial College Healthcare NHS Trust, London, UK
  8. Quality & Effectiveness, School of Health Studies, University of Bradford, Bradford, UK
  9. School of Health & Social Care, University of Bournemouth, Bournemouth, UK

Correspondence to Dr Paul Schmidt, Medical Assessment Unit, Portsmouth Hospitals NHS Trust, C Level, Queen Alexandra Hospital, Cosham, Portsmouth, Hampshire PO6 3LY, UK; Paul.Schmidt@porthosp.nhs.uk


Shaw et al 1 are correct in pointing out that the use of a single year's mortality data as a baseline comparator could introduce bias to our findings. They and Van Schalkwyk2 are also right to highlight the national reduction in hospital mortality rates over the past decade. However, we did not rely solely upon the overall mortality reduction to come to the conclusion that there was an association between the timing of the introduction of an electronic physiological surveillance system (EPSS) and the reduced mortality at the two study hospitals.3

Both Shaw et al and Van Schalkwyk appear to ignore the significance of the main findings of our study. The reductions in annual observed deaths are concentrated in specific years (Portsmouth 2006 and 2009, years 2 and 5 after baseline; Coventry 2009, year 3 after baseline).3 Also, the monthly p-charts reveal very significant, abrupt and sustained changes in observed deaths in both hospitals within the space of a few months of the full rollout at each site. Finally, these reductions in mortality are associated with substantially decreased month-to-month variation in deaths. As we described in our paper,3 and as Van Schalkwyk observes, the signal is clearer at Coventry in 2009 because a prior statistically significant change in hospital deaths had already occurred at Portsmouth in 2006, coincident with the initial deployment of EPSS on the Acute Medicine Unit. This lends support to our view that the majority of avoided deaths over the 7 years occurred in the periods following the introduction of EPSS. However, we were careful to point out that not all avoided deaths in this period were necessarily the effect of EPSS.
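By way of illustration only, the sketch below (Python, with hypothetical monthly counts; not our study data or analysis code) shows how a p-chart's centre line and 3-sigma limits are derived from a baseline period, and how an abrupt, sustained fall in deaths appears as a run of points below the lower control limit.

```python
# Illustrative p-chart calculation with hypothetical counts (not study data).
from math import sqrt

def p_chart(deaths, admissions, baseline_months):
    """Centre line and 3-sigma limits from a baseline period; every month is
    then judged against those limits."""
    p_bar = sum(deaths[:baseline_months]) / sum(admissions[:baseline_months])
    rows = []
    for d, n in zip(deaths, admissions):
        sigma = sqrt(p_bar * (1 - p_bar) / n)   # binomial SD for a month of n admissions
        lcl = max(0.0, p_bar - 3 * sigma)       # lower control limit
        ucl = min(1.0, p_bar + 3 * sigma)       # upper control limit
        p = d / n
        rows.append((p, lcl, ucl, p < lcl or p > ucl))
    return rows

# Hypothetical example: monthly deaths fall abruptly from month 5 onwards,
# so months 5-8 all fall below the lower control limit (special-cause signal).
deaths = [100, 98, 103, 99, 62, 60, 58, 61]
admissions = [3000] * 8
for month, (p, lcl, ucl, signal) in enumerate(p_chart(deaths, admissions, 4), 1):
    print(f"month {month}: p={p:.4f} limits=({lcl:.4f}, {ucl:.4f}) special cause={signal}")
```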

Van Schalkwyk uses long-run standardised (age-adjusted) mortality rates (SMR) to calculate a ‘secular adjustment factor’, which he then uses to adjust observed in-hospital deaths. However, population SMRs describe the risk of death at a given age across the whole population. They have been declining because of a huge range of factors, including lifestyle and technological change, quite apart from any improvement in healthcare delivery. In contrast, hospital deaths are drawn from a different sample of the population, namely those who, despite increased longevity, have reached the point of needing hospitalisation for an acute illness, usually with more comorbidities.
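To make the distinction concrete: if the secular adjustment takes the usual multiplicative form (our notation, and an assumption about its general shape rather than Van Schalkwyk's exact calculation), it amounts to

\[
D_t^{\text{adj}} = D_t \times \frac{\mathrm{SMR}_{t_0}}{\mathrm{SMR}_{t}},
\]

where \(D_t\) is the number of observed in-hospital deaths in year \(t\) and \(\mathrm{SMR}_{t}\) is the national age-standardised mortality rate for that year relative to the baseline year \(t_0\). The ratio in this expression is estimated from the whole population, whereas \(D_t\) is drawn from the selected subpopulation of acutely admitted patients, so the factor cannot simply be assumed to transfer to hospital deaths.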

While Van Schalkwyk is correct to point out the potential confounding effect that variation in the number of admissions coded as receiving palliative care can have on the hospital standardised mortality ratio (HSMR) calculation, it is important to note that we used the observed deaths in the HSMR 80 data set, not the HSMR itself. The only way a change in ‘palliative care’ deaths could have contributed to the change in observed deaths that we saw would have been a sudden change in practice in 2006 or 2009 such that a large proportion of patients requiring palliative care died at home or in a hospice rather than in hospital. There is no evidence to support this.
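For clarity, the HSMR is the ratio of observed to risk-adjusted expected deaths across the diagnosis groups that account for approximately 80% of in-hospital deaths (the ‘HSMR 80’ basket referred to above):

\[
\mathrm{HSMR} = 100 \times \frac{\sum_{i} O_i}{\sum_{i} E_i},
\]

where \(O_i\) and \(E_i\) are the observed and expected deaths in diagnosis group \(i\). Changes in palliative care coding act on the expected deaths \(E_i\) in the denominator; the observed counts \(O_i\), which are what we analysed, are unaffected by coding practice and could only change if patients actually died elsewhere.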

We agree with Shaw et al that well-intentioned interventions in healthcare occasionally produce unintended harm. Arguably, for EPSS to have had an effect, earlier or more frequent escalation to junior doctors would have been expected. We have no evidence to support the view that EPSS led to either an inappropriate increase in workload or a reallocation of junior doctor time and priorities. The introduction of EPSS did not lead to a change in junior doctor staffing.

Shaw et al's view that a multicentre randomised controlled trial (RCT) would assist in identifying whether EPSS was the true reason for the fall in mortality fails to recognise the complexity, required duration and expense of implementing such a trial. In addition, RCTs may not necessarily be the best research method for complex interventions such as ours.4–6 Perhaps the best that can be hoped for is a much larger study of the type that we conducted, but comprising many hospitals using a de facto multicentre stepped wedge approach.

Finally, two of the authors of the letter by Shaw et al 1 appear to have undeclared conflicts of interest pertinent to their interest in our research.7,8

References

Footnotes

  • Competing interests None.

  • Provenance and peer review Not commissioned; internally peer reviewed.
