
A scoping review of real-time automated clinical deterioration alerts and evidence of impacts on hospitalised patient outcomes
Robin Blythe1, Rex Parsons1, Nicole M White1, David Cook2, Steven McPhail1,3

1 Australian Centre for Health Services Innovation, Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Kelvin Grove, Queensland, Australia
2 Intensive Care Unit, Princess Alexandra Hospital, Metro South Health, Brisbane, Queensland, Australia
3 Digital Health and Informatics, Metro South Health, Brisbane, Queensland, Australia

Correspondence to Robin Blythe, Queensland University of Technology, Brisbane, QLD 4000, Australia; robin.blythe{at}qut.edu.au

Abstract

Background Hospital patients experiencing clinical deterioration are at greater risk of adverse events. Monitoring patients through early warning systems is widespread, despite limited published evidence that they improve patient outcomes. Current limitations, including infrequent or incorrect risk calculations, may be mitigated by integration into electronic medical records. Our objective was to examine the impact on patient outcomes of systems for detecting and responding to real-time, automated alerts for clinical deterioration.

Methods This review was conducted according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews checklist. We searched Medline, CINAHL and Embase for articles implementing real-time, automated deterioration alerts in hospitalised adults and evaluating one or more patient outcomes, including intensive care unit admission, length of stay, in-hospital cardiopulmonary arrest and in-hospital death.

Results Of 639 studies identified, 18 were included in this review. Most studies did not report statistically significant associations between alert implementation and better patient outcomes. Four studies reported statistically significant improvements in two or more patient outcomes, and were the only studies to directly involve the patient’s clinician. However, only one of these four studies was robust to existing trends in patient outcomes. Of the six studies using robust study designs, one reported a statistically significant improvement in patient outcomes; the rest did not detect differences.

Conclusions Most studies in this review did not detect improvements in patient outcomes following the implementation of real-time deterioration alerts. Future implementation studies should consider: directly involving the patient’s physician or a dedicated surveillance nurse in structured response protocols for deteriorating patients; the workflow of alert recipients; and incorporating model features into the decision process to improve clinical utility.

  • Information technology
  • Decision support, clinical
  • Quality improvement
  • Healthcare quality improvement
  • Decision support, computerized


WHAT IS ALREADY KNOWN ON THIS TOPIC

  • Prior reviews have shown that early warning systems for clinical deterioration may not improve patient outcomes. However, the reasons for this remain unclear.

WHAT THIS STUDY ADDS

  • Response protocols, including attending physician involvement, are likely more important than type of alert used to detect clinical deterioration.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE AND/OR POLICY

  • Organisations implementing early warning systems for clinical deterioration should carefully consider the structure of alert responses, including the clinical utility of these alerts to responsible staff.

Introduction

In acute care, a deteriorating patient is defined as one who moves from one clinical state to a worse state that increases their risk of morbidity and mortality.1 Clinical deterioration is associated with adverse sequelae, including prolonged hospital stays, disability and death,1–3 and can require admission to intensive care units (ICUs). Using data including vital signs, high-risk patients can potentially be identified early.4 Reviews of models for identifying deteriorating patients have suggested that adverse events can be predicted minutes to days in advance with reasonable accuracy.5–8

Predicting clinical deterioration has become a popular line of research inquiry.5 8 A common approach is the Early Warning Score (EWS), a model of patient vital signs and nursing assessments that can be used to determine when to alert clinicians that patients are deteriorating.9 Models may be knowledge-driven, using clinical judgement to determine what constitutes deterioration, or data-driven, using statistical or machine learning techniques to estimate associations between observed variables and adverse events.5 When a predetermined score is reached, a clinician is typically notified; while many hospitals might use the same EWS, the clinician receiving the alerts and the subsequent clinical response are not well understood.10
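
To make the knowledge-driven approach concrete, the sketch below shows a simplified EWS of the kind described above: each vital sign is banded into a sub-score, the sub-scores are summed, and an alert is raised when a predetermined threshold is reached. The bands and threshold are illustrative assumptions only, not those of any published score.

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    respiratory_rate: float  # breaths/min
    heart_rate: float        # beats/min
    systolic_bp: float       # mm Hg
    temperature: float       # degrees Celsius

def band(value, cutoffs):
    """Return the sub-score for the first band whose upper limit the value falls below."""
    for upper, points in cutoffs:
        if value < upper:
            return points
    return cutoffs[-1][1]

def early_warning_score(v: Vitals) -> int:
    """Sum banded sub-scores for each vital sign (bands are hypothetical, for illustration only)."""
    score = 0
    score += band(v.respiratory_rate, [(9, 2), (15, 0), (21, 1), (30, 2), (float("inf"), 3)])
    score += band(v.heart_rate, [(40, 2), (51, 1), (101, 0), (111, 1), (130, 2), (float("inf"), 3)])
    score += band(v.systolic_bp, [(71, 3), (81, 2), (101, 1), (200, 0), (float("inf"), 2)])
    score += band(v.temperature, [(35.0, 2), (38.5, 0), (float("inf"), 2)])
    return score

ALERT_THRESHOLD = 5  # hypothetical trigger level

def should_alert(v: Vitals) -> bool:
    """A clinician would be notified when the summed score reaches the predetermined threshold."""
    return early_warning_score(v) >= ALERT_THRESHOLD

if __name__ == "__main__":
    patient = Vitals(respiratory_rate=26, heart_rate=118, systolic_bp=92, temperature=38.9)
    print(early_warning_score(patient), should_alert(patient))  # 7 True
```

A data-driven model would replace the hand-chosen bands with coefficients estimated from historical outcomes, but the alerting logic at the threshold is broadly the same.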

For patients to benefit from timely prediction, it is important that developed deterioration models can be readily implemented to assist with decision support for clinical teams. Effective implementation can facilitate early actions that have meaningful remediating impact on deteriorating patients. Yet, studies investigating the implementation of EWSs have reported mixed results for patients, with a large majority of studies finding no detectable improvement in important clinical outcomes including in-hospital mortality or cardiac arrest when compared with clinical judgement alone.7 8 10–12

A possible reason for this lack of evidence is a lack of consideration for practical integration into clinical decision support systems. To assist with decision support, validated prediction models should offer enough information for clinicians to be alerted and respond quickly, leading to better outcomes.13 There is evidence to indicate that skilled clinicians can identify markers of deterioration over similar time frames to deterioration models.14 15 If clinical deterioration models cannot augment clinical judgement, allow clinicians to spend less time on surveillance, or pre-empt deterioration, the incremental utility of models for monitoring deterioration compared with current practice may be negligible.

Limitations in the implementation of deterioration models include infrequently and incorrectly calculated risk scores,16 pressure to refer fewer cases to the Medical Emergency Team or Rapid Response Team (RRT)12 and inconsistent or untimely response to escalation.7 By using deterioration models embedded in digital systems, including integrated electronic medical records (iEMR), also known as electronic health records, some of these issues might be mitigated. Advances in digital infrastructure support automated data collection and processing, enabling decision analysis.17 18 Models can use longitudinal data to provide dynamic risk prediction for individual patients and provide potentially useful information for clinicians in near real time.

With these recent and ongoing advances, it is timely to consider the current scope of literature related to clinical deterioration models and their associations with improved patient outcomes. While prior reviews have investigated the link between prediction tools and patient outcomes,7 8 10 11 ours is the first to focus on the delivery and response to deterioration alerts. We selected a scoping review methodology to explore the current approaches for responding to automated deterioration alerts and their evaluation with respect to patient outcomes.

We chose to use a scoping review approach based on established guidelines.19 20 First, our study sought to address a known gap in the literature identified in previous research10 by describing the recipients of deterioration alerts and any subsequent clinical actions guided by these alerts. Our classification of alert recipients and clinical responses was conducted post hoc, as there was insufficient prior evidence to guide the categorisation of alert responses into distinct fields. Second, we applied narrative rather than quantitative synthesis, on the basis that a wide variety of intervention-specific factors may have impacted study outcomes and that results between papers were unlikely to be directly comparable. Finally, we included a broad variety of studies, from randomised controlled trials to nursing implementation descriptive articles. A systematic review may have excluded all but the most rigorous studies on the basis of methodological quality; however, as our goal was to explore the range of actions following deterioration alerts, we did not want to omit potentially useful information.

Objectives

Our research question was whether prediction models and associated real-time alert protocols for detecting clinical deterioration led to improved patient outcomes compared with standard care. For clinical deterioration models identified in our review, we considered available evidence for how alerts were triggered and the clinical response to alerts generated. We focused on the following patient outcomes to define clinical deterioration based on a conceptual overview of clinical deterioration: in-hospital cardiac or respiratory arrest, in-hospital mortality, admission to ICU and hospital length of stay (LOS).1

Methods

No patients or members of the public were included in this study; ethics approval was not required as we analysed only papers published in the public domain. No scoping review protocol has been previously published. The Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews checklist was used for this review and can be found in the online supplemental file.21


Search strategy and eligibility criteria

We searched for all journal and conference articles that evaluated automated, electronically integrated deterioration alerts for hospitalised adults with at least one of the following patient outcome indicators: in-hospital cardiac or pulmonary arrest (IHCPA), in-hospital mortality, admission to ICU, and hospital LOS. Articles were excluded if they did not use a control group, focused on paediatric or obstetric patients, used condition-specific models (eg, sepsis, drug overdose), or did not include at least one of the aforementioned patient outcomes. Articles that only evaluated ICU patients were also excluded as these patients can already be classed as being in a state of critical illness.

The three included databases, PubMed, Embase and the Cumulative Index to Nursing and Allied Health Literature (CINAHL), were last searched on 18 February 2021. All publication years and languages were included. The PubMed search string is shown in table 1 as an exemplar, and equivalent search strings were generated for Embase and CINAHL using the Polyglot tool.22 The reference lists of included articles were also screened for articles that may have been relevant for inclusion. Search calibration was performed for PubMed using the Bond University Systematic Review Accelerator.23 Five studies were identified as seed studies by an initial PubMed search24–28 and used to refine the search to exclude terms that added many studies without identifying relevant articles (eg, ‘predict*’, ‘model’) and to include relevant terms (eg, ‘algorithm’).

Table 1

PubMed search string

Standard care was defined as a combination of clinical judgement of recognising deteriorating patients and, if in use, any EWS without computer-generated alerts, in addition to the RRT for deteriorating patients. The intervention being reviewed was the automated alert following a clinical deterioration prediction, and any modified protocols as a result of the alert.

Study selection and data extraction/synthesis

Two authors (RB, RP) independently screened abstracts using the Rayyan platform for literature review coordination.29 One author (NW) resolved disputes. Included articles were passed to independent full-text review (RB, RP) before NW resolved disputes. Data extraction was performed by RB, with items verified by RP. Extracted data related to study design, setting (facility type, patient population), control and intervention groups, prediction models (variables used, threshold for alert, calculation frequency), alert response (staff notified, required actions) and the outcomes noted above. Outcomes included effect sizes, CIs and measures of statistical significance, with a cut-off value of p<0.05. Prediction models are summarised in the online supplemental file.
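
For studies reporting unadjusted results, effect sizes of this kind can be reproduced directly from event counts. The sketch below, using hypothetical before-and-after counts, shows how an unadjusted odds ratio and Wald 95% CI would be calculated; it illustrates the statistical convention applied during extraction rather than any individual study's analysis.

```python
import math

def odds_ratio_ci(events_pre, n_pre, events_post, n_post, z=1.96):
    """Unadjusted odds ratio (post vs pre implementation) with a Wald 95% CI
    computed on the log-odds scale."""
    a, b = events_post, n_post - events_post  # intervention period: events, non-events
    c, d = events_pre, n_pre - events_pre     # control period: events, non-events
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, (lower, upper)

# Hypothetical counts: 120 deaths in 6000 pre-implementation admissions,
# 100 deaths in 6200 post-implementation admissions.
print(odds_ratio_ci(events_pre=120, n_pre=6000, events_post=100, n_post=6200))
```

Adjusted estimates reported by some studies instead come from regression models conditioning on patient characteristics, which this simple calculation does not capture.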

Results

Database searches identified 639 studies (figure 1). After abstract screening, 40 unique studies were passed to full-text review. Of those, 17 studies were included, as well as 1 study identified through citation searching of reviewed articles (table 2).

Figure 1

Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) flow chart of included studies. EHR, electronic health record; ICU, intensive care unit.

Table 2

Study setting, design and control and intervention characteristics

Study design

Two studies used randomised controlled designs24 30 and a further three studies compared control and intervention group patients using prospective cohort designs.27 31 32 Twelve studies used quasi-experimental designs with either retrospective or prospective28 33 data collection. One study provided insufficient information to classify the study design.34 Three studies used data from multiple sites,26 35 36 with the remaining 15 occurring at a single site. Sample size was disclosed for all but two studies27 37 and ranged from 571 to 374 838 patient episodes.

Deterioration models

All studies alerted clinicians when a predetermined threshold was reached. Eleven studies used formulae that triggered alerts based on model outputs, for example, the risk score associated with adverse events. Of these, five studies reported developing their own models based on statistical measures of prediction accuracy, including positive/negative predictive values and/or alert frequency.24 26 30 31 33 Six studies adopted models already developed and available in the literature.25 28 32 35 38 39 The remaining seven studies selected items using a combination of literature and clinical judgement27 34 37 or did not describe the method for selecting alert thresholds.36 40–42

Studies described their systems as: a Modified Early Warning Score (MEWS);38 39 41 the EWS;34 the National Early Warning Score (NEWS);35 a combination of the NEWS, the Chronic Respiratory EWS and a palliative care tool;28 the Central Manchester University National Health Service Foundation Trust EWS;42 and the Rothman Index.25 One study used the systemic inflammatory response syndrome criteria.40 A multisite study used a variety of EWSs across its sites,36 while five studies reported customising an existing EWS for their institutions.27 31 32 34 37 Three studies used logistic regression models with rolling feature windows based on the last 12 hours or 24 hours.24 26 30 Further details on the prediction models identified are provided in the online supplemental file.
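
As a data-driven illustration, three of the included studies used logistic regression over rolling windows of recent observations. The sketch below shows, under assumed column names and a 24-hour window, how such features might be constructed and a model fitted; it is a simplified example, not the pipeline used in any included study.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Assumed long-format training data: one row per vital-sign observation per admission,
# with hypothetical columns admission_id, obs_time (datetime), heart_rate,
# respiratory_rate, systolic_bp and a per-admission outcome label `deteriorated`
# (eg, ICU transfer or death). This is a simplified, one-prediction-per-admission
# illustration rather than a continuously scored deployment pipeline.

def rolling_features(obs: pd.DataFrame, window: str = "24h") -> pd.DataFrame:
    """Summarise each admission's most recent `window` of observations into model features."""
    obs = obs.sort_values("obs_time")
    rows = []
    for admission_id, grp in obs.groupby("admission_id"):
        end = grp["obs_time"].max()
        recent = grp[grp["obs_time"] > end - pd.Timedelta(window)]
        rows.append({
            "admission_id": admission_id,
            "hr_mean": recent["heart_rate"].mean(),
            "hr_max": recent["heart_rate"].max(),
            "rr_mean": recent["respiratory_rate"].mean(),
            "sbp_min": recent["systolic_bp"].min(),
            "deteriorated": grp["deteriorated"].iloc[0],
        })
    return pd.DataFrame(rows)

def fit_model(features: pd.DataFrame) -> LogisticRegression:
    """Fit a logistic regression of the outcome on the rolling-window features."""
    X = features[["hr_mean", "hr_max", "rr_mean", "sbp_min"]]
    y = features["deteriorated"]
    return LogisticRegression(max_iter=1000).fit(X, y)

# At prediction time, predict_proba outputs would be compared against an alert
# threshold chosen from predictive values and expected alert frequency.
```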

Data capture

Seven studies used manual data entry for vital signs and lab results, whereby nurses were required to enter all information into the iEMR.24 25 27 30 32 37 42 Three studies used automated data sources, which collected vital signs directly from patient monitors and/or lab results from the iEMR.33 34 40 One study used automated iEMR data combined with patient demographics and pre-existing comorbidities.26 The remaining seven studies used a mix of automatically collected labs and vitals supplemented with manually entered variables such as the patient’s conscious state.28 31 35 36 38 39 41

Alert calculation

All studies automatically calculated alerts. Seven studies used continuous calculation methods, without mention of the frequency of calculation.24 30 33–35 39 40 Two studies each used 5 min,31 32 hourly25 26 and 4-hourly25 26 recalculation intervals. Four studies used dynamic updating based on patients’ last recorded scores, ranging from every 12 hours to every 15 min depending on risk.28 38 41 42 One study did not specify calculation frequency, only mentioning that scores were updated during rounds, as these practices may have differed across the study’s multiple sites.36
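
Dynamic updating of the kind described above ties the recalculation interval to the patient's last recorded score, so that higher-risk patients are rescored sooner. The sketch below illustrates one possible mapping from score to next recalculation time; the bands and intervals are assumptions for illustration, not those reported in the included studies.

```python
from datetime import datetime, timedelta

def next_recalculation(score: int, now: datetime) -> datetime:
    """Map the last recorded score to the time of the next recalculation
    (bands and intervals are hypothetical)."""
    if score >= 7:
        interval = timedelta(minutes=15)
    elif score >= 5:
        interval = timedelta(hours=1)
    elif score >= 3:
        interval = timedelta(hours=4)
    else:
        interval = timedelta(hours=12)
    return now + interval

# Example: a patient scoring 6 at 14:00 would be due for rescoring at 15:00.
print(next_recalculation(6, datetime(2021, 2, 18, 14, 0)))
```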

Alert targets and responses

Alerts were automatically sent to different clinical staff members. Alert responders are described in table 2. Alerts were sent to charge nurses via pager,24 31 a crisis nurse via any available iEMR kiosks,27 and a remote monitoring nurse via the iEMR.26 Alerts directed to the RRT attending physician40 or director37 were sent via pager, and to the RRT nurse via mobile phone.30 One study used alerts to inform the RRT nurse of which patients to prospectively evaluate at the beginning of each 12-hour shift.25 Studies notified physicians via pager41 or on a screen in their offices.28 A further three studies broadcast alerts to a screen visible to the nursing team.32 33 38 The remaining five studies notified the patient’s nurse through bedside electronic means, including the iEMR.34–36 39 42

Responses after receiving an alert varied. Charge nurses were required to conduct bedside assessments and decide on a care plan, including whether to call the RRT.24 31 One study required the crisis nurse to review alert triggers every 4 hours, view additional data and screen for sepsis.27 One study tasked a remote monitoring nurse with reviewing alerts, coordinating care with the patient’s physician and continuing to monitor patient status following escalation.26 Studies alerting the RRT attending physician,40 director37 or RRT nurse30 instructed them to evaluate the patient and act according to clinical judgement. One study used the RRT for proactive rounding in order of highest risk, in addition to standard duties.25 Three studies notifying patient physicians in addition to nurses either recommended28 or required41 42 escalation if the risk score was high enough. The remaining studies left escalation solely at the nurse’s discretion34 35 39 or did not report specific escalation protocols.25 36

In-hospital mortality

Fifteen studies assessed in-hospital mortality. Only Subbe et al reported a statistically significant reduction in in-hospital mortality (OR=0.79 (0.63–0.99))28 following implementation of real-time alerts using regression models that adjusted for patient characteristics. Evans et al reported reduced mortality on one ward where nurses received deterioration recognition and escalation training in addition to the deterioration alert (3.7%–2.6%, p=0.04), compared with another ward where only the alert was implemented (0.6%–0.5%, p value and CIs not reported).31 Of the remaining studies, five did not report estimate uncertainty for in-hospital mortality,24 26 37 39 41 although Escobar et al reported improved adjusted 30-day mortality (relative risk (RR)=0.84 (0.78–0.90)).26

Six studies reported unadjusted point estimates indicating a potential reduction in mortality following implementation of real-time alerts, though these were not statistically significant: Bedoya et al (ICU transfer or death OR=0.94 (0.84–1.05) and OR=0.90 (0.77–1.05) at academic and community facilities, respectively);35 Fletcher et al (incidence rate ratio (IRR)=0.96, p=0.89);32 Jones et al (9.5%–7.6%, p=0.19);42 Kollef et al (OR=0.947, p=0.865);30 Mestrom et al (1.6%–1.1%, p=0.9);38 Weller et al (0.5%–0.3%, no p value or CIs reported).33 Bellomo et al (1.8%–2.0%, p=0.34)36 and Fogerty et al (no effect size reported, p=0.07)40 applied regression models and reported increases in in-hospital mortality, but these were not statistically significant.

In-hospital cardiac or pulmonary arrest

Eight studies assessed IHCPA. Two studies reported a statistically significant reduction in cardiac arrest incidence following implementation of alerts: Heller et al (0.53%–0.21%, p<0.001)41 and Subbe et al (0.65%–0.08%, p=0.002).28 Fletcher et al reported an increase in cardiopulmonary arrest rates (IRR=1.46, p=0.43), but this was not statistically significant.32 Bellomo et al reported a decrease in the use of mechanical ventilation in the USA (1.8%–1.6%, p=0.17) but noted that this increased in other countries; there was also a decrease in cardiac arrest rates (0.4%–0.3%, p=0.34) that was not statistically significant.36 Jones et al also reported a decrease (0.4%–0.0%, p=0.21) that was not statistically significant.42 The remaining three studies reported a reduction but did not apply formal hypothesis testing: Duncan et al (21–9), Heal et al (3–1) and Parrish et al (1.19–1.16) per thousand discharges.27 37 39

ICU admission

Fourteen studies assessed ICU admissions. Of these, six examined unplanned ICU admissions,25 28 32 36 40 41 and the remaining eight examined all ICU admissions.24 26 30 31 33 35 38 42 Two studies found a statistically significant reduction in unplanned ICU admissions: Danesh et al (8.85–6.73 per 1000 patient days, p=0.001)25 and Heller et al (3.6%–3.0%, p<0.001),41 though neither study adjusted for patient-level factors such as comorbidities. Fogerty et al (OR=1.25, p=0.036) reported a statistically significant increase in unplanned ICU admissions.40 Bellomo et al (5.4%–5.4%, p=0.95),36 Fletcher et al (IRR=1.15, p=0.25)32 and Subbe et al (1.2%–0.9%, p=0.158)28 reported no change, an increase and a decrease in point estimates of unplanned ICU admission rates, respectively, but estimates were not statistically significant.

Eight studies examined all ICU admissions. Escobar et al (RR 0.91 (0.84–0.98) for ICU admission within 30 days of alert)26 and Jones et al (0.47%–0.28%, p=0.04)42 found reductions in ICU admission rates that were considered statistically significant. Evans et al (4.1%–5.1%, p=0.23) found that there was an increase in ICU admission rates when all patients were pooled, predominantly from one ward where they reported that nurses were trained in appropriate escalation protocols, however this was not statistically significant.31 Bedoya et al found no change in the composite outcome of ICU transfer or death at academic (HR=0.94 (0.84–1.05)) or community (HR=0.90 (0.77–1.05)) sites, respectively.35 Kollef et al (OR=0.972, p=0.898),30 Mestrom et al (13.4%–10.6%, p=0.36)38 and Weller et al (5.3%–4.0%, p=0.09)33 all found slight reductions in the point estimates of ICU admission rates, though none were statistically significant. Bailey et al found that ICU admissions dropped from 16% to 14%, but did not report uncertainty.24

Length of stay

Ten studies examined LOS. Four reported a statistically significant reduction: Jones et al (9.7–6.9 days, p<0.001);42 Kollef et al (9.4–8.4 days, p=0.038);30 Escobar et al (hospital discharge HR 1.07 (1.03–1.11))26 and Bellomo et al (4–3 days, p<0.0001), driven by the participating US hospitals (3.4–3 days, p<0.0001) with no change in non-US hospitals.36 Mestrom et al (12–11 days, p=0.39) noted a reduction, though this was not statistically significant,38 while Evans et al found a statistically significant increase in LOS in the ward that received additional escalation training (4.4–4.9 days, p=0.002), but not in the ward that did not (4.1–4.3 days, no p value or CIs reported).31 Three studies reported reductions but did not perform hypothesis testing: Bailey et al (7.1–6.9 days);24 Giuliano et al (14–10 days)34 and Heller et al (14.7–13.8 days).41 Duncan et al reported no change in adult LOS, and did not report effect size estimates or results of hypothesis testing.37 Patient outcomes are summarised in table 3.

Table 3

Summary of model type and patient outcomes including effect size and statistical significance, if reported

Discussion

Studies identified in this scoping review did not support the theory that real-time alerts for detecting clinical deterioration led to improved patient outcomes compared with standard care. Half of the studies identified did not report statistically significant improvements in the selected patient outcomes, although we acknowledge limitations related to statistical significance as a sole criterion for assessing impacts.43 44 This review identified several emergent themes to guide further work in this field.

All four studies that reported a statistically significant improvement in more than one outcome26 28 41 42 were unique in that the patient’s physician was either directly alerted or consulted as part of a structured protocol. Studies alerting the charge nurse24 31 or crisis nurse27 leading to RRT escalation did not report improved outcomes, though one study with proactive rounding and care planning from the RRT nurses reported a reduction in ICU admission rates.25 These findings suggest that nurses and rapid responders may not benefit from automated alerts, but physicians may when consulted using structured care protocols.

Model type, such as MEWS or logistic regression, was not associated with improved patient outcomes, indicating that no one type of EWS was necessarily superior. Given that 12 studies also implemented an EWS between the control and intervention stages, our findings are consistent with previous reviews on this topic, which found limited evidence that clinical judgement supplemented by an EWS leads to improved outcomes compared with clinical judgement alone.7 11

Similar numbers of studies reported manual and automated data collection approaches. While continuously recalculated risk scores were not independently associated with improved patient outcomes compared with those recalculated at variable or less frequent intervals, they were likely still beneficial. Automated information collection and tracking can free up nursing time for patient care, leading to other benefits not within the scope of this review; one study found that automated data collection saved nurses 1.6 min per patient review.36

Providing additional training to nurses in the recognition and escalation of deteriorating patients may have contributed to improvements in outcomes in three studies.28 31 33 One study noted improvements in both in-hospital cardiac arrest and mortality,28 while another noted reductions in ICU admissions and mortality that did not meet its selected significance threshold.33 One study provided this additional training to one ward and not the other,31 finding that the ward receiving additional training observed a reduction in in-hospital mortality but an increase in LOS and ICU admission rates. This indicates that additional training in recognising deterioration may increase the sensitivity of escalation, potentially trading increased ICU admissions and LOS for reduced mortality.

Study design

Five studies employed simultaneous control and intervention groups to account for any confounding interventions or changes to care that might have occurred between cohorts.24 27 30–32 Of these five studies, only Kollef et al found an improvement in any patient outcome, a 1-day reduction in LOS.30 Two studies, by Bailey et al and Kollef et al, were randomised; both occurred at the same hospital in Missouri and used the same prediction model, although with different patient populations and responses to alerts.24 30

Weller et al33 and Escobar et al26 used a before-and-after design and statistically adjusted for existing trends in outcomes, with the latter reporting statistically significant improvements in LOS and ICU admission. While statistical techniques can attempt to correct for confounding factors, they do not necessarily account for limitations in study design. These methods add a degree of robustness; however, hospitalised patient outcomes often improve over time independently of deterioration alerts, particularly in studies with large samples and long time frames.45 46
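
The trend adjustment described above is often implemented as a segmented (interrupted time series) regression, in which the outcome is modelled as a function of time, an indicator for the post-implementation period and time since implementation. The sketch below uses hypothetical monthly mortality rates to show the general form of such a model; it is not the specific analysis used by either study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly in-hospital mortality rates (%) over 24 months,
# with the alert system introduced at month 12.
rng = np.random.default_rng(0)
months = np.arange(24)
data = pd.DataFrame({
    "month": months,
    "post": (months >= 12).astype(int),             # level change at implementation
    "months_since": np.clip(months - 12, 0, None),  # trend change after implementation
    "mortality_rate": 2.0 - 0.01 * months - 0.2 * (months >= 12) + rng.normal(0, 0.05, 24),
})

# Segmented regression: baseline level and trend, plus level and trend changes
# at implementation. The `post` coefficient estimates the step change in the
# outcome net of the pre-existing trend.
model = smf.ols("mortality_rate ~ month + post + months_since", data=data).fit()
print(model.params)
print(model.conf_int())
```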

We suggest that future studies report findings using relevant reporting guidelines from the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network to improve transparency and reproducibility.47 Four studies did not undertake any form of uncertainty analysis and simply reported raw numbers or rates.27 34 37 39

Limitations

This review was unable to identify substantial detail about clinical responses to deterioration alerts. Many studies reported only the clinician notified and the method of notification, not the clinical actions that followed. Study citations were searched for additional information, but these rarely provided further detail.

Risk of bias and methodological limitations were not explored in depth, as those topics are more appropriate for a systematic review. In addition, it was sometimes unclear whether patient populations included obstetric or paediatric patients. Exclusion of these patients was often implied, and findings from this review may not be generalisable to these populations.

We were uncertain whether ICU admissions, particularly those considered unplanned, were appropriate outcomes. Unplanned ICU admissions may result from adverse events or continued deterioration. However, earlier recognition may simply transform an unplanned admission into a planned, precautionary one, which may or may not ultimately benefit the patient or reduce the hospital resources used. More generally, ICU admission is both an indicator of severity and a common part of treatment escalation. It is questionable whether a reduction in ICU admissions is an appropriate outcome or goal compared with cardiac arrest, mortality and LOS.

Implications and future research directions

A likely reason for the inconsistent evidence for real-time deterioration alerts is that alerts may simply have notified nurses of what they had already observed. If EWSs are capable of streamlining the monitoring process, potentially even replacing nursing surveillance with a similar level of accuracy, these tools may provide significant clinical utility by allowing clinical staff to focus less on data entry and more on treatment. Using EWSs to take on some of the patient surveillance burden may allow the redistribution of limited staffing resources to higher-risk patients. However, care should be taken that alerts do not become too frequent in the process, subjecting responders to alert fatigue.35 Defining alert thresholds with alert fatigue in mind is one potentially useful approach to mitigate this concern.48
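
One pragmatic way to define alert thresholds with alert fatigue in mind is to choose the lowest threshold whose expected alert volume stays within what responders can realistically act on. The sketch below, using simulated validation-set risk scores, picks a threshold that caps alerts at a chosen rate per 100 patient-days; the cap, data and function name are assumptions for illustration.

```python
import numpy as np

def threshold_for_alert_budget(risk_scores: np.ndarray, patient_days: float,
                               max_alerts_per_100_patient_days: float) -> float:
    """Return the lowest score threshold whose alert volume on a validation set
    stays within the chosen alert budget (a hypothetical tuning rule)."""
    budget = max_alerts_per_100_patient_days * patient_days / 100
    # Taking the (1 - budget/n) quantile of the score distribution yields ~budget alerts.
    q = max(0.0, 1.0 - budget / len(risk_scores))
    return float(np.quantile(risk_scores, q))

# Simulated validation data: one maximum daily risk score per patient-day.
rng = np.random.default_rng(1)
scores = rng.beta(2, 8, size=5000)  # 5000 patient-days of scores on a 0-1 risk scale
threshold = threshold_for_alert_budget(scores, patient_days=5000,
                                        max_alerts_per_100_patient_days=4)
print(round(threshold, 3), int((scores >= threshold).sum()), "alerts expected")
```

The resulting threshold could then be checked against the positive predictive value and sensitivity achieved at that operating point before deployment.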

Identifying the projected outcome and the drivers of a patient’s classification may be more informative for nurses and physicians seeking to make appropriate decisions. One recent study trained multiple models on ICU patients, finding that models trained for specific causes of ICU admission, such as infection and suspected sepsis, outperformed all-cause models such as the NEWS.49 The authors hypothesised that the varied reasons for ICU admission were better predicted individually, though multiple models may be difficult to implement. This approach may also save clinical time by directing alert responders to suitable treatment protocols. For example, a deterioration model that can identify potential sepsis can be used to recommend starting a patient on a sepsis bundle; this level of insight would not be gleaned from an all-cause deterioration model.

Conclusions

In this review, we examined studies implementing real-time clinical deterioration alerts in hospitals. Most studies in this review did not detect improvements in patient outcomes following the implementation of real-time deterioration alerts. We identified that escalation to the patient’s physician as part of structured care protocols, rather than the type of warning system, was the primary common thread among studies with multiple improvements in patient outcomes. Considering the workflow of the alert recipient and employing more sophisticated methods that incorporate model features may provide greater clinical utility.

Data availability statement

Data sharing is not applicable as no data sets were generated and/or analysed for this study.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

References

Footnotes

  • Contributors All authors have fulfilled the following criteria: Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; and drafting the work or revising it critically for important intellectual content; and final approval of the version to be published; and agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Guarantor: RB.

  • Funding This study was funded by NHMRC (1181138); Cooperative Research Centre (DHCRC-0058).

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
