Background Hospitals have been slow to adopt American Heart Association (AHA) guidelines limiting the use of continuous cardiac monitoring, for fear of missing important patient cardiac events. A new continuous cardiac monitoring policy was implemented at a tertiary-care hospital, seeking to monitor only those patients for whom monitoring was clinically indicated and to decrease the number of false alarms in order to improve overall alarm response.
Methods Leadership support was secured, a cross-functional alarm management task force was created, and a system-wide policy was developed based on current AHA guidelines. Process measures, including cardiac monitoring rate, monitored transport rate, emergency department (ED) boarding rate and the percentage of false, unnecessary and true alarms, were measured to determine the policy's impact on patient care. Outcome measures, including length of stay and mortality rate, were measured to determine the impact on patient outcomes.
Results Cardiac monitoring rate decreased 53.2% (0.535 to 0.251 per patient day, p<0.001), monitored transport rate decreased 15.5% (0.216 to 0.182 per patient day, p<0.001), ED patient boarding rate decreased 36.6% (5.5% to 3.5% of ED patients, p<0.001) and the percentage of false alarms decreased (18.8% to 9.6%, p<0.001). Neither the length of stay nor mortality changed significantly after the policy was implemented.
Conclusions The observed improvements in process measures coupled with no adverse effects to patient outcomes suggest that the overall system became more resilient to current and emerging demands. This study indicates that when collaboration across a diverse team is coupled with strong leadership support, policies and procedures such as this one can improve clinical practice and patient care.
- Human factors
- Implementation science
- Healthcare quality improvement
There is strong evidential support for removing low-risk cardiac patients from continuous cardiac monitoring, but hospitals have been hesitant to implement large-scale policy changes restricting its use. Dysrhythmias requiring physician intervention are exceedingly rare in acute care patient populations, and overuse of telemetry monitoring contributes to overburdening of clinicians, increased costs for the hospital, increased legal liability and overcrowding in emergency departments (EDs) due to patients waiting for inpatient telemetry beds.1 This overuse of monitoring for lower-acuity patients also results in monitoring for lower-likelihood events, which has been shown mathematically to increase false alarm rates and reduce positive predictive value,2 thereby increasing the likelihood that clinicians will disregard telemetry alarms altogether.3 However, implementing specific interventions has been difficult due to the prevailing opinion among clinicians that increased monitoring improves clinical outcomes and reduces legal liability.1 Even though there has recently been increased pressure to improve alarm management policies and procedures from the ECRI Institute,4 the Society for Hospital Medicine5 and The Joint Commission,6 few institutions have demonstrated success in implementing policies that reduce monitoring for these low-risk patients.
We sought to determine whether our plan to develop and communicate a new continuous cardiac monitoring policy would result in improvements to patient care processes, ultimately leading to improved patient outcomes. The goal of the policy was to order continuous cardiac monitoring only for patients who would likely benefit from it, thereby freeing up hospital resources and reducing the risk of disregarding alarms without increasing the risk of patient harm. The policy was based on Practice Standards for Electrocardiographic Monitoring in Hospital Settings, published in 2004 by the American Heart Association (AHA). These standards stratify patients into three risk categories: class I, in which most to all patients would benefit from cardiac monitoring; class II, in which some patients may benefit; and class III, in which cardiac monitoring is not indicated.7 To implement this policy, we secured leadership support and created a multidisciplinary alarm management task force. We used the Practice Standards as our foundation, and then tailored them to the specific needs of our institution.
We conducted a retrospective analysis of institutional data and direct observations of three inpatient units to understand the impacts of this intervention. The retrospective data analysis measured weekly averages of process and outcome measures to evaluate the effectiveness and safety of the policy. The process measures were cardiac monitoring rate, monitored transport rate and the ED boarding rate. The outcome measures were length of stay (LOS) and mortality. The data collected in the direct observations were used to calculate the percentage of true, false and unnecessary alarms. These were collected because they have been directly correlated with decreased alarm response rate and increased response time.8–12 Using these methods together allowed us to understand the direct impacts of this policy and give insights on other process and outcome measures that may have been impacted.
Setting and background
The new policy and accompanying technology changes were implemented at a Midwest tertiary care health system. The intervention was implemented in five hospitals, affecting 37 medical/surgical, cardiac, critical care and hybrid units containing over 1000 patient beds capable of continuous cardiac monitoring. In response to The Joint Commission's National Patient Safety Goal 6 for 2014, the Chief Quality and Patient Safety Officer and the Chief Nursing Officer for the health system's two largest hospitals commissioned the formation of a patient alarms task force. The task force comprised physicians and nurses from throughout the hospital as well as subject matter experts in information technology (IT), human factors engineering, risk management and data analysis.
Creation of subcommittees
The task force was divided into five subcommittees: Executive Steering, Physiological Monitoring Oversight, Platform, Training and Implementation, and Monitoring and Evaluation. The Executive Steering committee was responsible for approving all recommendations from other subcommittees, overseeing the development of the new cardiac monitoring policy and communicating to hospital leadership. Its membership included the Chief Quality and Patient Safety Officer, the Chief and Associate Chief Nursing Officers, physician champions and the leaders from all other subcommittees. The Physiological Monitoring Oversight committee was responsible for developing interventions involving new protocols, processes and technology configurations to increase responsiveness to clinical alarms. Its membership included nurses, physicians, risk management, IT and human factors engineering professionals. The Platform committee was responsible for implementing approved interventions and acting as liaisons to hospital security, facilities, clinical engineering, business continuity and support, and outside vendors. Its membership included IT and clinical engineering professionals. The Training and Implementation committee was responsible for communicating the new protocols, processes and technology configurations throughout the organisation, and training employees on any resultant changes. Its membership included nurses (including Clinical Nurse Specialists), risk management and vendors. The Monitoring and Evaluation committee was responsible for determining measures, collecting data and distributing scorecards to evaluate the effectiveness of task force interventions. Its membership included nurses, physicians, quality and patient safety, IT and human factors engineering professionals.
The first initiative of the task force was to develop, implement and communicate a new continuous cardiac monitoring policy aimed at better identifying those patients who would benefit from monitoring and how long they should be monitored. The primary aim of the policy was to change the default organisational culture with regard to continuous monitoring. Monitoring was often used not due to a specific concern but instead as an extra patient safety mechanism. Monitoring orders were continued unless a physician explicitly terminated them, with most lasting until the day of discharge. Monitoring was also included by default in many physician order sets, and was therefore often ordered implicitly. The new policy used the AHA's 2004 risk stratification7 to categorise patients into four risk levels. Critical patients would have continuous monitoring ordered automatically, and those orders would never expire. Continuous monitoring for class I, class II and class III patients would be manually ordered, and would expire after 72, 48 or 36 h, respectively.
Changes to the institution's electronic health record were made to complement the new policy. First, continuous cardiac monitoring orders were removed from all order sets except for those in critical care and oncology. Six new orders were created. Critical care orders had no expiration. Inpatient class I orders expired after 72 h. Inpatient class II orders expired after 48 h. Inpatient class III orders expired after 36 h. Two additional orders were also created that did not conform to the AHA's risk stratification categories. Chemotherapy orders expired based on the patient's treatment. ED orders expired after 6 h. Physicians would select one of these orders, whose title gave the type of order and the expiration (eg, ‘Class 1—Cardiac Monitoring—72 hrs’). For class I, II and III orders, they would then be required to give a reason for the order, selecting from a fixed list of patient characteristics that were appropriate for the chosen order, which are shown in table 1. When an order expired, the patient would be removed from monitoring unless the order was explicitly renewed.
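The expiration scheme above can be illustrated with a small sketch. This is purely hypothetical: the order titles follow the policy's naming pattern, but the dictionary and function are our own, not the institution's EHR logic, and chemotherapy orders are omitted because their expiration depended on the individual treatment.

```python
from datetime import datetime, timedelta

# Hours until each order type expires; None means the order never expires.
# Chemotherapy orders are omitted: their expiration depended on the treatment.
ORDER_EXPIRATION_HOURS = {
    "Critical Care - Cardiac Monitoring": None,
    "Class 1 - Cardiac Monitoring - 72 hrs": 72,
    "Class 2 - Cardiac Monitoring - 48 hrs": 48,
    "Class 3 - Cardiac Monitoring - 36 hrs": 36,
    "ED - Cardiac Monitoring - 6 hrs": 6,
}

def order_expires_at(order_title: str, placed_at: datetime):
    """Return when an order expires, or None for never-expiring orders."""
    hours = ORDER_EXPIRATION_HOURS[order_title]
    return None if hours is None else placed_at + timedelta(hours=hours)
```

When the returned time passes without an explicit renewal, the patient would be removed from monitoring.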
Prior to policy implementation, nurse leadership on the task force designed and led educational sessions for nursing, while physician champions on the task force directed similar efforts for physicians. Communication with nurse and physician champions led to further discussion of the evidence-based policy and helped to ensure understanding of the purpose and goals of both the patient classes and accompanying physician order expiration guidelines. The policy was presented and approved by hospital-wide executive committees as well as the institution's board of directors. Widespread advertisement of the new initiative included presentations at departmental and division meetings, grand rounds and on-demand video presentations.
Retrospective data collection
To evaluate the effectiveness of the new policy, multiple process and outcome measures were used. The majority of these measures were calculated from data collected retrospectively from an institutional data warehouse for the 12-week periods before and after the intervention was implemented. These periods were selected to give sufficient time to understand the baseline behaviour of the system, to mitigate the risk of outliers unduly biasing the results and to minimise the risk that other system changes before or after the implementation date would confound the results. The preimplementation period was from 30 September 2013 to 22 December 2013. The postimplementation period was from 23 December 2013 to 16 March 2014. The process measures included cardiac monitoring rate, monitored transport rate and ED boarding rate. Cardiac monitoring and monitored transport rates were chosen because they were the specific issues that the intervention sought to address. ED boarding rate was chosen because of the evidence in the literature that decreased monitoring was associated with decreased ED boarding.1 The outcome measures included average LOS and mortality indices. Cardiac monitoring rate was calculated by dividing the total number of days that cardiac monitoring was used by the total patient days for a given week. The number of monitored transports was defined as the number of times that a registered nurse was required to accompany a patient travelling off the unit because the patient was on cardiac monitoring. This was converted to a monitored transport rate by dividing by the total patient days for the week. ED boarded patients were defined as patients who were still waiting in the ED for an inpatient bed 4 h or more past the time of their admission decision. The ED boarding rate was calculated as the number of ED boarded patients per day divided by the ED volume for that day.
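As a minimal sketch, the three process-measure rates defined above reduce to simple ratios (the function names here are ours, for illustration only):

```python
def cardiac_monitoring_rate(monitoring_days: float, patient_days: float) -> float:
    # Days of continuous cardiac monitoring per patient day in a given week.
    return monitoring_days / patient_days

def monitored_transport_rate(monitored_transports: int, patient_days: float) -> float:
    # RN-accompanied off-unit transports per patient day in a given week.
    return monitored_transports / patient_days

def ed_boarding_rate(boarded_patients: int, ed_volume: int) -> float:
    # Fraction of the day's ED patients still awaiting an inpatient bed
    # 4 h or more past the admission decision.
    return boarded_patients / ed_volume
```

For example, the preimplementation cardiac monitoring rate of 0.535 corresponds to 535 monitoring days per 1000 patient days.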
Both the LOS and mortality indices were calculated by normalising the institution's data against benchmarks from the University Health System Consortium (UHC) matched on time period and patient condition.
The percentages of true, false and unnecessary alarms were collected by conducting six 2 h observations across three different units. Observations were randomised by day of the week and time of day, although weekend and night shifts were not observed due to the availability of the clinical researchers. Each of the three units was observed for 2 h periods before and after the intervention. The cardiac alarms included in these observations were those signifying asystole, ventricular fibrillation, ventricular tachycardia, ventricular bradycardia, tachycardia (measured by R-to-R distance), bradycardia (measured by R-to-R distance), high and low heart rate (measuring average in a time window), low SpO2, leads fail, no telemetry and telemetry low battery. These alarms were selected because they were used in all of the units across the hospital.
Each observation was conducted by one of two clinical researchers, both certified as a Clinical Nurse Specialist or higher. Their training for this study consisted of learning the definitions of true, false and unnecessary alarms,13 and reviewing a list of scenarios that would qualify as false or unnecessary. False alarms were those triggered by sensor noise or a system malfunction, meaning that the alarm signified an event that was not occurring. Examples of false alarms included asystole alarms triggered by patient movement or low SpO2 alarms triggered by an inconsistent connection to the patient. Both of these would likely be identified as false by reading the waveform data. Unnecessary alarms were those in which the alarm system worked as designed, but signified an event that was not clinically significant or required no additional intervention. One example of an unnecessary alarm was a low heart rate alarm triggered by a patient who had a lower than normal baseline resting heart rate. Another example was a leads off alarm triggered when the patient was disconnected from monitoring to be transported off-unit. These were usually identified by consulting the patient's chart or observing the patient's room. All other alarms were defined as true.
During the observations, the researcher sat in front of the central station monitors for the observed unit. For each observed alarm, the researcher noted time of day, the type of alarm (eg, SpO2, asystole and tachycardia), the patient's physiological value that triggered the alarm when available (eg, heart rate of 48 bpm for a low heart rate alarm) and whether the alarm was true, false or unnecessary. To make the final determination of false, unnecessary or true, the clinical researcher consulted the data provided by the monitoring system, the patient's chart, the bedside nurse and the patient as necessary.
After all observations were completed, the percentage of false alarms was defined as the number of false alarms observed divided by the total number of alarms observed. The same calculation was made for unnecessary and true alarms.
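The percentage calculation above amounts to tallying the classification assigned to each observed alarm; a minimal sketch (the labels and helper function are illustrative, not part of the study's tooling):

```python
from collections import Counter

def alarm_percentages(classifications):
    """Fraction of observed alarms in each category.

    `classifications` holds one label per observed alarm:
    "true", "false" or "unnecessary".
    """
    counts = Counter(classifications)
    total = len(classifications)
    return {label: counts[label] / total
            for label in ("true", "false", "unnecessary")}
```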
Both the retrospective data analysis and direct observation study arms were approved by The Ohio State University Institutional Review Board. No identifying information was collected to protect the privacy of the participants and their patients. No aspect of the observations or interviews was determined to be more than minimal risk.
T tests were used to determine statistical differences between the periods before and after the policy implementation for cardiac monitoring rate, monitored transport rate and ED boarding rate. Bonferroni corrections for multiple simultaneous comparisons were used to set the threshold for significance for these measures at 0.017. Statistical process control charts were also created for these measures to determine whether their underlying processes were stable after the policy implementation, indicating whether the observed changes were likely to persist beyond the implementation period. Observations outside of the control limits were removed if influencing factors were present in those observations that were not common to the entire set.14 It was then determined whether the remaining observations were in control. Individuals and Moving Range (I-MR) charts were created for the cardiac monitoring and monitored transport rates, with each point representing an individual weekly observation. A Mean and Range (Xbar-R) chart was created for ED boarding rate, with each point representing seven daily observations. All statistical process control charts were created with R V.3.2.1. The change in the distribution of true, false and unnecessary alarms was analysed using a χ2 test of association of the distribution with time period (pre vs post). Means and SDs were calculated for all measures, and 95% CIs of pre–post changes in measures were calculated. For subsequent analyses of secondary measures, including LOS index and mortality index, t tests at the 0.05 level were used to determine statistical differences. All statistics were calculated using SPSS V.21.
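For reference, the Bonferroni threshold and the Individuals-chart control limits described above can be reproduced with a short sketch. These helpers are ours, not the R or SPSS code used in the study; the Individuals chart uses the standard constant 2.66 = 3/d2, with d2 = 1.128 for moving ranges of size 2.

```python
import statistics

def bonferroni_alpha(alpha: float = 0.05, n_comparisons: int = 3) -> float:
    # 0.05 / 3 comparisons gives the 0.017 significance threshold
    # used for the three process measures.
    return alpha / n_comparisons

def i_chart_limits(weekly_values):
    """Centreline and control limits for an Individuals (I) chart."""
    centre = statistics.mean(weekly_values)
    moving_ranges = [abs(b - a) for a, b in zip(weekly_values, weekly_values[1:])]
    mr_bar = statistics.mean(moving_ranges)
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar
```

Points falling outside these limits, or sustained runs on one side of the centreline, would flag the process as out of control.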
Retrospective data analysis
Differences before and after the policy implementation in cardiac monitoring rate, monitored transport rate, ED boarding rate, percentage of false cardiac alarms, percentage of unnecessary cardiac alarms, LOS index and mortality index are shown in table 2. When comparing the hospital-wide data before and after the implementation of the policy, average cardiac monitoring rate decreased 53.2% (0.535 to 0.251, p<0.001), monitored transport rate decreased 15.5% (0.216 to 0.182, p<0.001) and ED boarding rate decreased 36.6% (5.5% to 3.5%, p<0.001). In subsequent analyses, the changes in LOS index (0.99 to 0.98, p=0.42) and mortality index (0.68 to 0.65, p=0.80) were not statistically significant. Statistical process control charts for the preimplementation and postimplementation periods for weekly averages of cardiac monitoring, monitored transport and ED boarding rates are shown in figure 1. Cardiac monitoring rate was out of control both before and after the implementation, with the week of 10/6 being below the lower control limit before implementation, the weeks of 10/13 through 12/8 being above the centreline before implementation and the week of 12/29 being above the upper control limit after implementation. The week of 12/29 was removed because it was revealed that some units did not fully adopt the policy in the first week. After this removal, the postimplementation process was in control. Monitored transport rate was in control before and after the implementation. ED boarding rate was in control before the implementation, but was out of control after the implementation. The weeks of 12/29 through 2/16 were all below the centreline, and the weeks of 2/23 and 3/16 were both above the upper control limit. The weeks of 2/23 through 3/16 appeared to be trending upward. When they were removed, weeks 13 through 20 were in control. The Moving Range and Range charts for these measures (not presented here) did not identify any additional out-of-control points or trends.
Three hundred and thirty-five alarms were observed during 12 h of observation, divided into 126 alarms during 6 h of observation preimplementation and 229 alarms during 6 h of observation postimplementation. Between these two periods, false alarm percentage decreased from 18.8% to 9.6% (χ2(3)=23.20, p<0.001). The percentage of unnecessary alarms stayed consistent between the pre (46.2%) and post (46.7%) periods.
The results of this study suggest that the development and communication of this new policy safely reduced the length of time that patients spent on continuous cardiac monitoring. In addition, other indicators of overall system health improved in the 12-week period after the policy's implementation. Our findings are notable in a number of areas. First, the decrease in the cardiac monitoring and monitored transport rates without associated increases in mortality and LOS validates the AHA guidelines, which predict that reducing the amount of continuous cardiac monitoring to only those patients who would likely benefit will not adversely affect overall patient outcomes. Second, the decreases in cardiac monitoring rate, monitored transport rate, ED boarding rate and false alarms can all be viewed as indications that the system increased its adaptive capacity, and therefore its resilience, as each suggests that additional resources (ie, monitor beds, bedside nurses, ED beds and mental workload) became available to be either held in reserve or redistributed as new needs emerged.15 ,16 The decrease in false alarms is particularly notable, as it suggests that the alarm response rate will increase and alarm response time will decrease proportionally to the new false alarm rate.8 The rates of cardiac monitoring and monitored transports, which are directly associated with the policy, remained in control for the entire postimplementation period. ED boarding rate, which is outside of the direct control of the policy, was out of control in the postimplementation period, with the out-of-control points coinciding with an apparent upward trend from 23 February 2014 to 16 March 2014. Further analysis revealed that this trend continued after the observation period was over, eventually returning to and surpassing preimplementation levels.
Because the process was in control for the first 8 weeks after implementation, and it was determined that both the inpatient and ED census remained static across the 12-week postimplementation period, it is likely that additional needs of the hospital changed the balance of available inpatient resources relative to specific ED demand. This eventual absorption of surplus resources is perhaps best explained by the Law of Stretched Systems, which states that all systems seek to operate at full capacity, exploiting past improvements to achieve a new intensity or tempo of activity.17 Although it is difficult to know the extent to which the changes in ED boarding rate, false alarms, mortality and LOS changes were due to the change in policy, it is non-trivial that none of the process or outcome indicators declined, as process improvements are not always associated with improved outcomes.18
These results reflect the efforts to enact a new policy and redesign supporting technology, and also the substantial resources invested by the institution to ensure that clinicians understood and complied with the policy. Two key factors for the successful implementation of this initiative were the strong, unwavering leadership support and widespread engagement of staff representing numerous roles within the organisation. The Chief Quality and Patient Safety Officer, Chief Nursing Officer and a physician champion led the initiative, with participation from additional physician and nurse champions from many departments and disciplines. This initiative was identified as a high priority by the highest level of institutional leadership, including the Chief Executive Officer (CEO), Chief Financial Officer (CFO) and Chief Operating Officer (COO). This facilitated the engagement of IT, human factors engineering and risk management professionals, and provided access to data for decision-making. These diverse perspectives coupled with this available data allowed task force participants to make data-driven decisions, accelerating consensus when disagreements arose. Human factors engineers worked closely with clinicians and IT professionals from the beginning of the process, resulting in policy and technology solutions explicitly designed to optimise usability and mitigate the risk of increased workload and other unintended consequences sometimes associated with healthcare technology.19 Specifically, human factors engineering participation ensured that all proposed changes to the electronic medical record were evaluated either through a heuristic review or usability test. In addition, it was their guidance that led to explicitly measuring true, false and unnecessary alarms, which is the most reliable predictor of alarm response,20 but is often not performed because it can be time-consuming to collect. 
This collaboration continued well past the implementation date by keeping specific subcommittees and the executive steering committee intact to continue monitoring the progress and success of this and other alarm task force initiatives, making sure that the benefits were not short-lived, and that the organisation did not revert to previous behaviours.
This study has several limitations. First, we only studied continuous cardiac monitoring policy implementation at one Midwest tertiary-care academic institution, so our findings may not generalise to all hospitals or other monitor alarm systems. However, we believe that the organisational dynamics and patient care needs of other hospitals are similar enough that they would benefit from a similar approach. In general, as stated above, the extent to which this intervention caused the decreases in number of boarded patients and percentage of false alarms is unclear. Because this was a pre–post study without a control, other factors may have contributed to the observed changes. Other safety interventions including improved lead preparation, lead maintenance and alarm setting personalisation likely affected false alarm rates. Other safety initiatives and hospital demands may have affected ED boarding, mortality and LOS. Additional studies focusing on potential mechanisms linking the policy and these other safety interventions to the reported measures would further our understanding. Also, the inability to observe nights and weekends increases the risk that the false and unnecessary alarm results will not generalise to these shifts. Finally, we were only able to conduct a small number of direct observations. As such, it is unclear how representative our observation data was and how well it would generalise.
In conclusion, this new policy and the resultant practices, derived from evidence-based guidelines,7 were successful in targeting the use of continuous cardiac monitoring to those patients who needed it most without compromising care for those patients for whom continuous monitoring was not indicated. By focusing our attention on those needing monitoring, we suggest that the freed resources can be redirected to other patient care processes, ultimately making the overall system more resilient to current and emerging needs. The steps taken by this institution to secure leadership buy-in and include a diverse set of stakeholders illustrate how critical both of these strategies are to implementing large hospital-wide policies to improve clinical practice and patient care.
Twitter Follow Michael Rayo at @hepcatrayo
Acknowledgements The authors would like to thank all the members of the patient alarm task force at The Ohio State University Wexner Medical Center for their work on crafting and communicating this policy institution-wide. We would especially like to thank Susan Bejciy-Spring who gathered the research that formed the foundation of the policy. Last, we would like to thank the numerous clinicians and administrators who, although not formally on the task force, participated in the observations of the units, gathered relevant operational data and helped us understand the detailed realities of the work being done at our institution.
Contributors Study design: MFR, JM, DE, TM, SDM-B; data collection: MFR, JM, TM; data analysis: MFR, SW; critical revision of manuscript: MFR, JM, DE, SDM-B, SW.
Competing interests None declared.
Ethics approval This study was conducted with the approval of The Ohio State University Institutional Review Board.
Provenance and peer review Not commissioned; externally peer reviewed.