Objective The difficulty of reliably identifying patients with a shortened life expectancy is an obstacle to improving palliative and end-of-life care. We developed and evaluated the feasibility of an automated tool to identify patients at high risk of death in the next year, to prompt treating physicians to consider a palliative approach and to reduce the identification burden faced by clinical staff.
Methods Two-phase feasibility study conducted at two quaternary healthcare facilities in Toronto, Canada. We modified the Hospitalised-patient One-year Mortality Risk (HOMR) score, which identifies patients with an elevated 1-year mortality risk, to use only data available at the time of admission. An application prompted the admitting team when a patient had an elevated mortality risk and suggested a palliative approach. The incidence of goals of care discussions and/or palliative care consultation was abstracted from medical records.
Results Our model (C-statistic=0.89) was similarly accurate to the original HOMR score and identified 15.8% and 12.2% of admitted patients at Sites 1 and 2, respectively. Of the 400 patients included, the most common indications for admission were a frailty condition (219, 55%), chronic organ failure (91, 23%) and cancer (78, 20%). At Site 1 (integrated notification), patients with the notification were significantly more likely to have a discussion about goals of care and/or palliative care consultation (34% vs 18%, p=0.016). At Site 2 (electronic mail), there was no significant difference (45% vs 53%, p=0.322).
Conclusions Our application is an accurate, feasible and timely identification tool for patients at elevated risk of death in the next year and may be effective for improving palliative and end-of-life care.
- trigger tools
- decision support, computerized
- healthcare quality improvement
A fundamental obstacle to improving palliative and end-of-life care (PEOLC) is the reliable identification of patients with shortened life expectancy or unmet palliative needs. Many organisations recognise the importance of early identification of patients who might benefit from palliative interventions.1 2 While patients dying of cancer are often referred for palliative services in the final months of life, patients with non-cancer illnesses or frailty typically make their palliative transitions only in the final weeks or days of life, if at all.3 Effective PEOLC interventions exist4 5; however, published studies have typically relied on research staff to identify patients.4 6 In practice, patient identification falls to the clinical staff who have numerous other responsibilities competing for their attention, decreasing the number of patients identified and limiting the effectiveness of PEOLC interventions.
Palliative interventions are often triggered when patients are felt to have a poor prognosis,7 based on sentinel events, clinical findings or clinician gestalt. However, clinicians frequently overestimate survival,8 9 which would delay interventions. Various prognostic methods for identification have been proposed—such as the ‘surprise question’ or clinical models.10–13 Most of these have shown low or moderate accuracy at best,14 15 but their principal limitation is the fact that they depend on a clinician who has the time and inclination to use them at the bedside.
An ideal solution would be a tool that is both accurate and automated—removing the need for clinical staff to participate in the initial identification of patients—providing timely prompts to a clinical team to perform a holistic assessment and address any unmet palliative needs. Others have highlighted the potential use of existing data in the electronic health record (EHR) to help drive such a clinical decision support tool.16 17 Recently, van Walraven et al described the Hospitalised-patient One-year Mortality Risk (HOMR) score for predicting 12-month mortality for patients admitted to hospital based on 12 administrative data points routinely coded at the time of discharge.15 18 In the present study, we developed a modified version of HOMR (mHOMR) based only on data fields available at the time of admission. We then created a computerised application that automatically calculated mHOMR scores for all patients as they were admitted to hospital and prompted the admitting team to consider PEOLC interventions for patients having an elevated mortality risk. The objective of this study is to evaluate the feasibility of using this tool to prompt the clinical team to consider palliative interventions.
We conducted a two-phase feasibility study of implementing a notification tool based on mHOMR at two quaternary healthcare facilities in Toronto, Canada (see online supplementary appendix A for site details).
Development of mHOMR
The HOMR index estimates the probability of death within 1 year of admission to hospital based on data available in the Canadian Institute for Health Information-Discharge Abstract Database (CIHI-DAD).18 The model includes 12 variables as well as several interaction variables and has been externally validated with excellent discrimination (C-statistic=0.89–0.92, depending on the validation cohort) and calibration.15
Three of the data fields included in the original HOMR model—admitting diagnosis, Charlson Comorbidity Index and the use of supplemental oxygen at home—were not available at the time of admission within the EHRs of our two hospitals but were coded postdischarge. As our goal was to create a model able to identify patients at elevated risk of mortality within 12 months so that PEOLC interventions could be initiated while the patient was still in hospital, we modified the existing HOMR score to develop a model that incorporates only data fields available within the EHR at the time of admission to hospital, using a similar approach to the original derivation.18
Using the same large cohort of patients used to derive HOMR, we used bootstrapping methods described by Austin et al 19 20 to select variables independently associated with 12-month survival. We used backward variable selection in logistic regression models with mortality as the outcome in a series of bootstrap samples of the original cohort. Covariates not included in a particular bootstrap model were assigned a parameter estimate of zero (0). Final regression coefficients were determined by calculating the mean regression coefficients of the 1000 bootstrap models, retaining in the final model those variables whose non-parametric 95th percentile credible interval excluded zero (0).
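The bootstrap selection procedure described above can be sketched in code. This is an illustration on synthetic data, not the original derivation code: the Newton-Raphson logistic fit, the p<0.05 elimination threshold and the bootstrap count are our assumptions; only the overall structure (backward elimination within bootstrap samples, zero coefficients for dropped covariates, averaging across bootstrap models and retaining variables whose percentile interval excludes zero) follows the text.

```python
import numpy as np
from math import erf, sqrt

def fit_logistic(X, y, iters=25):
    """Fit logistic regression by Newton-Raphson; return coefficients and Wald p-values."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])          # observed information matrix
        beta = beta + np.linalg.solve(H, X.T @ (y - p))  # Newton step
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    z = np.abs(beta) / se
    pvals = np.array([2 * (1 - 0.5 * (1 + erf(zi / sqrt(2)))) for zi in z])
    return beta, pvals

def backward_select(X, y, alpha=0.05):
    """Backward elimination; column 0 (the intercept) is never dropped."""
    cols = list(range(X.shape[1]))
    while True:
        beta, pv = fit_logistic(X[:, cols], y)
        cand = [(p, i) for i, (c, p) in enumerate(zip(cols, pv)) if c != 0]
        if not cand:
            return cols, beta
        worst_p, worst_i = max(cand)
        if worst_p <= alpha:
            return cols, beta
        cols.pop(worst_i)

def bootstrap_model(X, y, n_boot=200, seed=0):
    """Run backward selection in each bootstrap sample; covariates dropped from a
    bootstrap model get a coefficient of zero. Final coefficients are the means
    across bootstraps; a variable is retained when its non-parametric 2.5-97.5
    percentile interval excludes zero."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    coefs = np.zeros((n_boot, k))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        cols, beta = backward_select(X[idx], y[idx])
        coefs[b, cols] = beta
    lo, hi = np.percentile(coefs, [2.5, 97.5], axis=0)
    retained = [j for j in range(1, k) if lo[j] > 0 or hi[j] < 0]
    return retained, coefs.mean(axis=0)
```

On synthetic data with one genuine predictor and one noise covariate, the procedure retains the former and discards the latter.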
We used the methods of Sullivan et al 21 to create a point system—called the ‘mHOMR score’—which was calculated for each patient as a function of their hospital data. For each patient, the model returns an estimate of the probability of death within 12 months of admission to hospital. The model was assessed by measuring overall fit (using Nagelkerke’s R²), model discrimination (using the C-statistic) and model calibration (using the calibration slope). All fit assessments were optimism corrected using bootstrapping techniques (see online supplementary appendix B). Our assessments revealed mHOMR to have excellent discrimination (C-statistic=0.89)—nearly as high as HOMR (C-statistic=0.89–0.92)—but using only data fields available at the time of admission.
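For context, the C-statistic reported here can be read as the probability that a randomly chosen patient who died within 12 months was assigned a higher predicted risk than a randomly chosen survivor. A minimal, quadratic-time sketch of the computation:

```python
def c_statistic(risk, died):
    """C-statistic (concordance): the fraction of (decedent, survivor) pairs in
    which the decedent received the higher predicted risk; ties count as half."""
    deaths = [r for r, d in zip(risk, died) if d]
    survivors = [r for r, d in zip(risk, died) if not d]
    pairs = concordant = 0.0
    for rd in deaths:
        for rs in survivors:
            pairs += 1
            if rd > rs:
                concordant += 1
            elif rd == rs:
                concordant += 0.5
    return concordant / pairs
```

A value of 0.5 is chance; mHOMR’s optimism-corrected 0.89 means the model ranks a decedent above a survivor in 89% of such pairs.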
Table 1 presents details on the data fields used by mHOMR and how the mHOMR score is calculated. Online supplementary appendix C presents an example calculation of the mHOMR score using table 1 and a fictional patient admission. The data fields included in the final mHOMR model were: patient age and sex, admitting service, whether the current admission was an urgent 30-day readmission, number of emergency department (ED) visits in the past 12 months, admissions by ambulance in the past 12 months, patient’s living status (independent at home, rehab facility, at home with home care, nursing home, chronic care hospital), admission urgency of the current admission (elective, ED with ambulance, ED without ambulance) and whether the current admission was directly to the intensive care unit. The original HOMR score18 included an additional three data fields: admitting diagnosis, Charlson Comorbidity Index and the use of supplemental oxygen at home.
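To make the mechanics concrete, a score of this style can be sketched as a lookup of point values followed by a logistic transform. The point weights, field names, intercept and slope below are invented placeholders for illustration only; they are NOT the published table 1 values, which are not reproduced in the text.

```python
from math import exp

# Illustrative only: these weights are invented placeholders, NOT the
# published table 1 values.
POINTS = {
    "age_decades_over_60": 3,     # hypothetical: points per decade of age over 60
    "male": 1,
    "urgent_30day_readmission": 4,
    "ed_visits_past_year": 2,     # hypothetical: points per ED visit
    "arrived_by_ambulance": 3,
    "nursing_home_resident": 5,
    "direct_icu_admission": 4,
}

INTERCEPT, SLOPE = -5.0, 0.25     # hypothetical mapping from total points to risk

def mhomr_style_risk(patient):
    """Sum the point values for a patient's fields, then map the total through
    a logistic transform to a 12-month mortality probability."""
    total = sum(POINTS[field] * value for field, value in patient.items())
    return 1.0 / (1.0 + exp(-(INTERCEPT + SLOPE * total)))
```

Under these placeholder weights, a frail patient readmitted urgently from a nursing home scores well above a younger elective admission, which is the qualitative behaviour the real table 1 encodes.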
Discussion within the research team—which included internal medicine physicians at the two hospital sites where we deployed mHOMR—focused on the importance of avoiding alert fatigue and false positive notifications. We examined the data from the derivation of mHOMR and estimated that a threshold mHOMR score of 0.21 (ie, an estimated 21% risk of death within 12 months of admission to hospital) would result in a manageable number of patients flagged by mHOMR, given the clinical resources available at our two hospital sites. This threshold resulted in a sensitivity of 59%, specificity of 90%, positive predictive value of 36%, negative predictive value of 96%, positive likelihood ratio of 5.9 and negative likelihood ratio of 0.46 for death within 12 months of admission.
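These operating characteristics all follow from a 2×2 confusion matrix. The sketch below derives them from illustrative counts chosen to match the reported sensitivity and specificity at a roughly 9% one-year mortality prevalence; the counts are not the derivation cohort.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test characteristics from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),                # positive predictive value
        "npv": tn / (tn + fn),                # negative predictive value
        "lr_positive": sens / (1 - spec),     # positive likelihood ratio
        "lr_negative": (1 - sens) / spec,     # negative likelihood ratio
    }

# Illustrative counts only (not the derivation cohort): 100 deaths and
# 1000 survivors, with sensitivity 59% and specificity 90%.
m = diagnostic_metrics(tp=59, fp=100, fn=41, tn=900)
```

With these counts the positive likelihood ratio works out to 0.59/0.10=5.9 and the negative likelihood ratio to 0.41/0.90≈0.46, matching the values above; the predictive values (≈37% and ≈96%) also depend on prevalence, which is why they track the reported figures only at a similar mortality rate.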
For each hospital site, we developed an application to pull the required data fields from the EHR and calculate the mHOMR score for each new admission to hospital. The application for each hospital site was developed with the help of hospital IT staff and built within the programming languages of the hospital EHRs themselves. Once the application was built and tested, no further involvement was necessary from hospital IT staff and no maintenance was required for the mHOMR model. For each newly admitted patient, the application would calculate their mHOMR score and, if it was above our threshold of 0.21, a notification was sent to the patient’s admitting team. The notifications advised that the patient was at elevated risk for mortality in the next year, suggested the team consider a palliative approach to care and described several potential interventions (online supplementary appendix D). We avoided including the mHOMR score itself within the notifications sent by the application. Our goal was not to provide clinical staff with a specific prediction of mortality risk, but rather to bring their attention to a patient who might benefit from a palliative approach to care. The notifications were not prescriptive in nature—the decision about whether or not to use the interventions was left to the admitting team.
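The per-admission logic is structurally simple. In the sketch below, `risk_model` and `notify` are hypothetical stand-ins for the site-specific EHR integrations (which were written in the EHRs’ own programming languages); note that the numeric score is deliberately omitted from the message, as described above.

```python
THRESHOLD = 0.21  # estimated 21% risk of death within 12 months of admission

NOTIFICATION_TEXT = (
    "This patient may be at elevated risk of mortality in the next year. "
    "Consider whether a palliative approach to care is appropriate."
)

def screen_admission(admission, risk_model, notify):
    """Score one new admission and notify the admitting team when the risk
    exceeds the threshold. The numeric score is deliberately NOT included
    in the message sent to clinicians."""
    risk = risk_model(admission)
    if risk >= THRESHOLD:
        notify(admission["admitting_team"], NOTIFICATION_TEXT)
        return True
    return False
```

The threshold is the only site-specific tuning parameter, which is what makes the approach portable across hospitals with different clinical resources.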
As each site used a different EHR, two approaches were used to send the notifications to the admitting team. At Site 1, notifications were sent to the admitting medical team each morning using an electronic sign-out tool in the EHR used to communicate between the medical team members, nursing staff, and other allied health team members. The notification appeared next to the patient’s name on the sign-out list and notifications were typically acknowledged by the recipient (although this was not obligatory).
At Site 2, due to technical and practical limitations, we were only able to send mHOMR notifications as an email message to the patient’s most responsible physician, typically without any acknowledgment from the recipient. The content of the notifications was identical at both sites.
We collected data in two phases, from November 2016 to August 2017. Hospital sites were chosen out of convenience—our research team included members of the internal medicine and palliative care (PC) teams at the two hospital sites.
In Phase 1, our application collected data on patients newly admitted to the general internal medicine service at each site and calculated an mHOMR score for each new admission. No notifications were sent to admitting teams during this phase, in order to ensure completeness of data collection and the reliability of mHOMR score calculation.
In Phase 2, our application sent notifications to the admitting team for all newly admitted general internal medicine patients with an mHOMR score above the threshold.
The project was approved by the research ethics board at both participating institutions. As we calculated mHOMR scores from data already collected as part of normal admission, we did not obtain informed consent from all physicians or patients admitted to hospital, as this was impractical and unnecessary. The study did not have any effect on patient care while we ensured the technical functionality of the mHOMR application in Phase 1, as neither the mHOMR score nor the mortality risk was made available to patients or healthcare providers. In Phase 2, notifications were sent to physicians about patients with a high mHOMR score. Although in some cases the physicians would already have been aware of this risk—in a qualitative sense—there was the potential for unintended consequences of these notifications on patient care. Ideally, the notification would trigger the admitting physician to incorporate a palliative approach into the care plan, for example, by engaging in a goals of care (GoC) discussion with the patient/substitute decision-maker. These actions are already recommended on admission by policy at both sites as well as the provincial regulatory college, and may be beneficial to the patient and substitute decision-maker alike.5 However, we did not want the notification to be delivered to patients in an insensitive way or taken as a judgement that current or proposed disease-modifying or life-sustaining therapies would be ineffective and therefore should be withheld or withdrawn. We emphasised that the purpose of the notification was not to warn about mortality but to encourage the admitting physician to engage in a GoC discussion or address any unmet palliative needs. As part of the assessment of the acceptability of the tool, we conducted a qualitative study involving staff physicians, residents, patients and family members (to be published separately). We obtained written consent for participation in this component of the study.
To assess feasibility, we evaluated whether we had successfully developed a computer application to pull data accurately and reliably from the EHR and calculate an mHOMR score. To assess how the mHOMR notifications may impact patient care, we abstracted 100 consecutive patient records of those identified by mHOMR in Phases 1 and 2 at each site, comparing the prevalence of early (<72 hour postadmission) discussions about GoC and inpatient specialist PC consultation in each phase within each study site.
Patient admission diagnoses were categorised as cancer, chronic organ failure (eg, CHF exacerbation, COPD exacerbation) or a frailty-related diagnosis (eg, admission from a long-term care facility, or admission from home with a fall, confusion or another condition that would not require admission in a non-frail individual). These categorisations were derived via chart review by a trained and experienced research assistant who was unblinded to the phase of the study (EK). We compared patient data from Phase 1 and Phase 2 using Student’s t-test and the Mann-Whitney-Wilcoxon test for continuous variables, and Pearson’s χ² test for categorical variables, with effect sizes calculated for each—Cohen’s d for t-tests, r for Mann-Whitney-Wilcoxon tests, Cramér’s V for χ² tests. As the format of the notifications differed between sites, we compared Phases 1 and 2 within each site. All analyses were conducted using R.22
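The analyses were conducted in R; purely to illustrate the 2×2 comparison, the χ² test and Cramér’s V can be computed as follows. Whether a continuity correction was applied in the original analysis is not stated, so the Yates option here is an assumption.

```python
from math import sqrt, erfc

def chi2_2x2(a, b, c, d, yates=True):
    """Pearson chi-square test for a 2x2 table [[a, b], [c, d]] with 1 degree
    of freedom, with optional Yates continuity correction. Cramer's V is
    reported from the uncorrected statistic."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    diff = abs(a * d - b * c)
    chi2 = n * (diff - (n / 2 if yates else 0)) ** 2 / denom
    p = erfc(sqrt(chi2 / 2))              # chi-square(1) survival function
    v = sqrt((n * diff ** 2 / denom) / n)  # Cramer's V, uncorrected
    return chi2, p, v
```

For example, comparing proportions of 34/100 and 18/100 gives p≈0.016 and V≈0.18.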
Despite using three fewer data fields, mHOMR (C-statistic=0.89) had the same excellent discrimination as HOMR (C-statistic=0.89–0.92). In Phase 1, we did not encounter issues with missing data in the nine data fields used to calculate mHOMR scores—both sites had complete data for the fields required by mHOMR. Additionally, we checked whether the tool was correctly calculating the mHOMR scores by comparing the tool’s output with scores hand-calculated by members of our team for 50 consecutive patients. The tool calculated patient mHOMR scores reliably and without error.
In Phase 2, the application sent notifications for 610 patients over 3 months at Site 1 (15.8% of admissions) and 204 patients over 2 months at Site 2 (12.2% of admissions). Among the 400 patients whose charts we abstracted (100 from each phase at each site; see table 2), 220 (55%) were male, the mean (SD) age was 83 (7.8) years and the median (IQR) length of stay in hospital was 5 (7) days. Forty-three of 400 (11%) died during the admission. One hundred and forty-seven patients (38%) had an order for no cardiopulmonary resuscitation written at the time of admission (prior to the notification being sent). There were no significant differences in demographics or resuscitation orders between Phases 1 and 2 or between the two study sites (table 2). Overall, 219 patients (55%) were admitted with a frailty-related condition, compared with 91 (23%) with a chronic organ failure condition and 78 (20%) with a cancer-related condition—12 (3%) patients who generated a notification were admitted with a diagnosis that did not fit one of these three categories.
At Site 1 (integrated notification), we found that patients for whom a notification was sent in Phase 2 were significantly more likely to have an early GoC discussion or a consultation to an inpatient PC service compared with controls in Phase 1—34% versus 18%, p=0.016, V=0.18. At Site 2 (email notifications), no differences were observed in the incidence of early GoC discussion or PC consultation—45% versus 53%, p=0.322. There were no significant differences in rates of GoC discussion or PC consultation between Phases 1 and 2 within the cancer, organ failure, or frailty disease trajectories at Site 1. At Site 2, we found significantly higher rates of GoC discussion or PC consultation for cancer—100% versus 53%, p=0.002, V=0.57—and significantly lower rates for frailty—28% versus 60%, p=0.002, V=0.32—in Phase 2 compared with controls in Phase 1.
In this study, we found that it was feasible to develop and implement an automated process for using existing data in the EHR to identify patients at elevated risk of death in the coming year and prompting the admitting team to consider palliative interventions. When these prompts were integrated into the existing electronic workflow for patient care, a significantly larger proportion of patients at elevated risk of death had an early documented discussion about GoC or a PC consultation. We caution that our study was not primarily focused on assessing practice change as a result of identification.
We currently depend on clinical staff to identify patients for PEOLC interventions, typically based on severe symptoms or a poor prognosis, such as a newly diagnosed incurable illness, sentinel event, or functional decline.7 23–25 This process can be unreliable: even in the published literature, reviews have found ‘no clear definitions of PC patients’ and a ‘lack of consensus concerning the attributes of illnesses needing palliation and the ambiguous use of the adjective “palliative”’.6 There is also substantial variation in the timing of ‘early’ PC integration, and patient populations are often heterogeneous.4 We are unlikely to be able to systematically identify patients with unmet palliative needs if we apply diverse criteria to a poorly defined patient population in a labour-intensive and inconsistent manner.
One commonly suggested identification tool is the surprise question (SQ): a clinician asking her/himself ‘Would I be surprised if this patient died in the next 12 months?’. An answer of ‘no’ would then act as a trigger for a more detailed assessment and appropriate PEOLC intervention. The SQ has been widely advocated and integrated into certain frameworks designed for the identification of patients in need of PEOLC.24 25 We recently published a meta-analysis examining 11 621 patients across 16 studies and found that the SQ has modest accuracy at best, missing more than a third of dying patients, returning many false positives and performing particularly poorly for non-cancer patients.14 The SQ is not labour intensive to use, but it is highly subjective, and still relies on healthcare providers remembering to use it. Implementation studies of SQ-triggered interventions have shown evidence of very low uptake26 and qualitative studies have shown that some clinicians are unwilling to use the SQ as a trigger for PEOLC interventions, particularly in the frail elderly.26–29 Objective prognostic models have similarly been shown to be little better. Some—such as the Multi-Morbidity Index30—rely on data only available after coding has taken place, which may delay PEOLC interventions. While some models10–13 have shown moderate accuracy (C-statistics of 0.66–0.74), these models still depend on a clinician who has the time and inclination to use them at the bedside.
There are several factors which give mHOMR substantial advantages over provider-dependent PEOLC triggers such as the SQ or other objective prognostic models. In terms of accuracy, the mHOMR model we developed had the same excellent discrimination as the original HOMR model (C-statistic=0.89 vs 0.89–0.92), making it more accurate than other prognostic tools available.15 31 In terms of equity, mHOMR preferentially identified patients dying with a frailty or organ failure trajectory, as opposed to those with cancer. This is important as patients with frailty and non-cancer illness are far less likely to receive PC services prior to death than patients with cancer.3 32 33 This tool may help to close that gap, even if there are other barriers to PEOLC in the non-cancer population.34 In terms of feasibility and scalability, the mHOMR model relies on just nine data points that are commonly available in EHRs at the time of admission to hospital and is based on the HOMR model which has been validated in several million patient admissions, across several jurisdictions.15 It was also a simple tool, requiring no maintenance after the initial development.
In terms of efficiency, mHOMR can function reliably without significant changes to workflow. Once implemented, it can function autonomously to identify patients who may benefit from a palliative approach to care and who might otherwise not have been identified. In terms of versatility, the notifications can prompt any specific action, including a holistic assessment followed by appropriate clinical intervention, which could include symptom management, advance care planning, GoC discussions, deintensification of treatment, social and spiritual support, community supports or a combination of these elements. The notifications can also be sent to whoever is in the best position to assess whether a patient would truly benefit from a palliative approach to care—the attending physician, all members of the admitting team, PC clinicians in hospital or even a single individual assigned to assess all patients flagged by mHOMR. Finally, the threshold at which notifications are sent is completely configurable to fit the clinical resources of any hospital—it can be raised to increase specificity in resource-limited settings or lowered for greater sensitivity to accommodate a more scalable intervention.
The notifications are also timely, as the mHOMR tool can identify a patient many months before death at a time when the patient is admitted to a relatively well-resourced acute care environment. Although the tool can only be triggered via an admission to hospital, it would still have an opportunity to identify much of the dying population since more than 70% of Canadians are hospitalised at some point in the final year of life.33
The differences in findings between the two sites may have been related to the way the notifications were integrated into existing workflows. Notifications that arrive via email (Site 2) were visible to only the admitting physician and may have only been viewed after the physician has left the clinical environment. Site 2 also had a much higher baseline prevalence of early GoC discussion and PC consultation, driven by the clinical resources of PC at that site and their tight integration with the internal medicine team, suggesting the possibility of a ceiling effect for the notifications.
First, although our results showed a significant increase in GoC conversations and/or PC consultation at one site, this study was not powered or intended to measure the effect on patient care. Future work will focus on the effectiveness of specific PEOLC interventions triggered by mHOMR. Second, mHOMR identifies patients solely on mortality risk, which does not always indicate uncontrolled symptoms or unmet support needs. Additionally, not all patients with a model-predicted elevated risk of mortality would be appropriate for palliative interventions, as they may not even be willing to engage with the concepts involved in palliation. We intend to study the prevalence of symptoms and desire to engage in advance care planning in a subsequent study. Third, the intervention was not accompanied by specific PEOLC education for those receiving the notifications, which may have reduced their effectiveness. Not all physicians, nurses and other allied health team members are trained or comfortable with PEOLC discussions—if no inpatient PC service is available, it may be difficult for providers to have these conversations with patients without further education. Providing this education was beyond the scope of this initial pilot work. Fourth, there were some technical limitations in the way the notifications could be delivered to the clinical teams. Future work will examine better integration of mHOMR notifications into the EHR and workflow. Finally, we relied solely on the medical record as the source of data for early GoC discussions and PC consultations—some of these interventions may have been undocumented, although this is unlikely to have biased our findings. We did not measure the accuracy of mHOMR for predicting mortality prospectively, since mHOMR was generated and validated in a database with almost 10 000 times the number of admissions in our study.
We found that the mHOMR model was feasible as a tool for identifying patients at elevated risk of death in the next year and may be effective for triggering PEOLC interventions when integrated into existing communication systems on the medical ward. The model relied on data commonly collected in Canadian hospitals, making it relatively simple to implement across the country and potentially in other jurisdictions. Ultimately, our mHOMR tool is not intended to function in isolation but rather as an accurate, reliable and automated trigger for specific PEOLC interventions such as symptom management, GoC discussion, deprescribing or deintensification of treatment. Future studies will explore the effectiveness of the tool in this role by linking mHOMR to proven interventions, helping to ensure that the right care is delivered to patients requiring end-of-life care.
Contributors JD, SA, DMK and CvW conceived the study and developed the protocol. PW and JD led the drafting of the manuscript. All authors contributed to data collection and/or analysis and interpretation, revising the manuscript and approved the final version submitted for publication.
Funding This research is funded by Canadian Frailty Network (Technology Evaluation in the Elderly Network), which is supported by the Government of Canada through the Networks of Centres of Excellence (NCE) programme. This project was also supported financially by the Temmy Latner Centre for Palliative Care and the Toronto General/Toronto Western Foundation, and received in-kind support from the Ottawa Hospital Research Institute. JD received support for this project from the Associated Medical Services, Incorporated through a Phoenix Fellowship.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement No data are available.