Background Timely and reliable communication of critical laboratory values is a Joint Commission National Patient Safety Goal. The objective was to evaluate the effect of an automated system for paging critical values directly to the responsible physician.
Methods A randomised controlled trial on the general medicine clinical teaching units at an urban academic hospital was conducted from February to May 2006; the unit of randomisation was the critical laboratory value. The intervention was an automated paging system that sent the critical value directly to the responsible physician's pager. The control arm was usual care, a telephone call to the patient's ward by the laboratory technician. The primary outcome was response time, defined as the interval from acceptance of the critical value into the laboratory information system to the writing of an order on the patient's chart in response to the critical value. If the time of order was not documented, the time of administration of treatment was used to calculate response time.
Results For primary analysis, 165 critical values were evaluated on 108 patients with full response time data. The median response time was 16 min (IQR 2–141) for the automated paging group and 39.5 min (IQR 7–104.5) for the usual care group (p=0.33).
Conclusions The automated paging system shortened the time physicians took to respond to critical laboratory values, but the difference was not statistically significant. Future research should evaluate the effects of alerts for conditions that currently do not generate a phone call and the addition of real-time decision support to the critical value alerts.
Hospitalised patients with critical laboratory abnormalities are at increased risk of adverse events, and there are often gaps in their clinical care. For example, one study found that patients with hypokalemia (serum potassium less than 3 mmol/l) were 2.5 times more likely to die in hospital, yet 24% of these patients received inadequate management of their hypokalemia.1 An important step in the management of critical laboratory values is communication of the critical value to the responsible provider. Timely and reliable communication of critical laboratory values is a Joint Commission National Patient Safety Goal,2 but current practices for communicating critical values are highly variable. US surveys show that only 12% of critical value telephone calls are made to the responsible physician and that 0.7–5.5% of critical laboratory values are never called to anyone outside of the laboratory.3 4
Automatic paging of critical values directly to the responsible physician may be a useful way to address gaps in the communication process. Although there are descriptive studies of automated paging systems,5 there are few controlled evaluations. In one randomised trial, an automated paging system reduced median physician response time from 96 to 60 min.6 However, automated paging systems may have unintended downsides, such as disrupting usual lines of communication, providing excessive information, or undermining team functioning.7 The importance of timely response to critical laboratory values, the limited knowledge of the benefits of automated paging on response time and the potential for unintended downsides or unrecognised system complexities8 together indicate the need for evaluative studies of automated paging systems. If automated paging systems reduce response time, then further studies of the impact on time to resolution of the critical laboratory value and on clinical outcomes would be warranted. Our primary objective was to evaluate the effect of an automated paging system for critical laboratory values on physician response time in a randomised controlled trial.
We conducted a randomised controlled trial of critical laboratory values for patients admitted to the four general medicine clinical teaching units at Sunnybrook Health Sciences Centre. We studied critical laboratory values (table 1) for which automated paging might improve the response time. We excluded critical troponin values, because in a pilot study we found that the majority of critical troponin values led to no clinical response. We also excluded critical values that arose while the patient was under the care of an emergency department or critical care physician. The intervention was an automated real-time paging system (New Age Systems Inc., Evansville, Illinois, USA) that accepts laboratory values directly from the laboratory information system, detects critically abnormal results, and sends a notification to a dedicated alphanumeric pager carried by the responsible housestaff physician. The pager was carried by the senior resident on weekdays and by the on-call resident on nights and weekends.
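The core detection-and-notification step of such a system can be sketched as below; the test codes, critical limits, and pager interface here are hypothetical illustrations only, not the study's actual configuration (the study's critical values are listed in table 1).

```python
# Hypothetical critical limits per test code: (low, high); None means no
# limit on that side. These values are illustrative, not the study's.
CRITICAL_LIMITS = {
    "K": (3.0, 6.0),       # serum potassium, mmol/l
    "NA": (120.0, 160.0),  # serum sodium, mmol/l
    "HGB": (70.0, None),   # haemoglobin, g/l (low limit only)
}

def is_critical(test_code, value):
    """Return True if an accepted result falls outside its critical limits."""
    limits = CRITICAL_LIMITS.get(test_code)
    if limits is None:
        return False  # test not monitored for critical values
    low, high = limits
    return (low is not None and value < low) or (high is not None and value > high)

def page_if_critical(test_code, value, send_page):
    """Send an alphanumeric page for a critical result; send_page is any
    callable that delivers a message to the responsible physician's pager."""
    if is_critical(test_code, value):
        send_page(f"CRITICAL {test_code}={value}")
        return True
    return False
```

In the study, results reached this step automatically from the laboratory information system over an HL7 interface; parsing of HL7 messages is omitted from the sketch.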
All laboratories in Ontario are required to define critical values and implement a process for immediate notification of responsible personnel.9 The existing process for communicating critical laboratory values begins with the laboratory technician. First, the technician verifies the accuracy of the test result. The result is then accepted in the laboratory information system and is automatically transferred to the electronic patient record in real time through an HL7 interface. The technician also telephones the patient's ward with the critical laboratory value. The phone call may be received by a ward clerk or other clinical staff. The result is communicated to the patient's nurse, who then pages the responsible physician. This existing process is mandated by national standards and continued for all critical values throughout the study.
Our primary objective was to evaluate the effect of an automated paging system for critical laboratory values on physician response time, defined as the interval from acceptance of the critical value into the laboratory information system to the writing of an order on the patient's chart in response to the critical value. A research nurse identified all critical values by reviewing electronic records of all eligible patients. Our electronic patient record displays laboratory results, but medication orders and treatments are handwritten. For each critical value, the nurse reviewed the written medical record and recorded all orders and treatments around the time of the critical value. A panel of at least three study physicians reviewed data for each critical value. First, the panel excluded critical values that were non-actionable. For example, a critical low serum potassium level would be judged non-actionable if the patient was already on treatment for hypokalemia and the serum potassium level was improving compared with prior values. Reviewers then determined the orders and treatments administered in response to the critical value, the time of order and the time of administration of the treatment, based on the information taken directly from the chart by the study nurse.
In our pilot study, we found that physicians did not consistently document times in their orders. Throughout the study, we gave weekly feedback to our medical teams on the proportion of critical value orders that had a documented time of order. We gave a book prize and coffee shop gift certificates to the team with the best performance each month. If the time of order was not documented, we used the time of administration of treatment to calculate response time.
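This outcome definition, including the fallback to the administration time, amounts to the following; the function name and datetime handling are illustrative assumptions, not the study's actual tooling.

```python
from datetime import datetime

def response_time_min(accepted, order_time=None, admin_time=None):
    """Minutes from acceptance of the critical value into the laboratory
    information system to the physician's response. Falls back to the
    treatment administration time when the time of order was not documented;
    returns None when neither time is documented."""
    responded = order_time if order_time is not None else admin_time
    if responded is None:
        return None  # response time unmeasurable for this critical value
    return (responded - accepted).total_seconds() / 60.0

# Example: order written 16 min after the value was accepted.
accepted = datetime(2006, 2, 1, 10, 0)
rt = response_time_min(accepted, order_time=datetime(2006, 2, 1, 10, 16))
```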
The unit of analysis was the critical laboratory value. In the pilot study, we found that the mean response time under usual care was 120 min. We chose a minimum clinically important difference in mean response time of 60 min, giving a required sample of 154 critical values (two-tailed α of 0.05 and a power of 0.8).
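This sample size corresponds to the standard formula for a two-sample comparison of means. The standard deviation is not reported in the text, so the 132.5 min used below is a back-calculated value chosen purely to illustrate the arithmetic.

```python
import math
from statistics import NormalDist

def two_sample_n(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of means."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for power = 0.80
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# 60 min minimum clinically important difference; sd is an assumed value.
per_group = two_sample_n(delta=60, sd=132.5)
total = 2 * per_group  # 154 critical values under this assumed SD
```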
For the study, the software used a computer-generated randomisation scheme to randomly assign each critically abnormal result to an automated page or no automated page. The study nurse and physician reviewers were unaware of the group allocation for the critical values.
Our final results were highly skewed, so we compared median response times for the two groups using a two-sample, two-sided Wilcoxon rank sum test. We also used the Hodges-Lehmann estimate of shift to calculate the median difference in response time (with 95% confidence interval (CI)) between the automated paging and usual care groups. We compared other study group characteristics with χ2 tests for categorical variables and t tests for continuous variables. We used multiple imputation to handle critical values with missing response times: we created five imputed data sets, analysed each individually, and report the average of the five individual runs.
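The two main comparisons can be reproduced with standard routines, shown here on invented response times rather than the study data; `mannwhitneyu` is SciPy's implementation of the two-sample Wilcoxon rank sum test, and the Hodges-Lehmann shift is the median of all pairwise between-group differences.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def hodges_lehmann_shift(x, y):
    """Two-sample Hodges-Lehmann estimator: the median of all pairwise
    differences x_i - y_j."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.median(x[:, None] - y[None, :]))

# Illustrative response times in minutes (invented, not the study data).
paging = [2, 5, 16, 40, 141, 8, 30]
usual = [7, 20, 39, 60, 120, 15, 90]

stat, p = mannwhitneyu(paging, usual, alternative="two-sided")
shift = hodges_lehmann_shift(paging, usual)
```

A 95% CI for the shift can be read off the ordered pairwise differences using the rank-based critical values of the Wilcoxon statistic; that step is omitted here for brevity.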
The Sunnybrook Research Ethics Board approved the trial protocol. The study investigators retained complete control over study design, data collection, data analysis and interpretation and manuscript preparation.
There were 396 critical values on 162 patients from February to May 2006. We prospectively excluded 85 critical values for the following reasons: critical troponin value (62), patient under care of critical care physician (16) and patient under care of emergency department physician (7). We judged another 79 alerts to be non-actionable. Finally, six patients had two simultaneous (highly correlated) critical values (eg, a critically low haemoglobin and a critically low serum potassium reported at the same time). We only analysed the response time for one critical value from each pair (results were unchanged when we repeated the analysis with the excluded values, data not shown). This left 226 critical values on 125 patients. Sixty-one critical values had neither a documented time of physician order nor a documented time of treatment administration, so a response time could not be measured. We compared the 61 critical values with no response time with the 165 critical values with a response time. Critical values without a response time were more likely to be for low serum sodium (20% vs 10%, p=0.04) or low haemoglobin (23% vs 10%, p=0.01) and less likely to be for low serum potassium (38% vs 67%, p=0.007).
For our primary analysis, we evaluated the 165 critical values (table 1) on 108 patients (table 2) with a measurable response time. The distribution of critical values between the experimental and control arms was similar (table 1) and patients had similar distributions of age, sex, serum creatinine on admission, presence of a do-not-resuscitate order on the chart and length of stay at time of critical value (table 2). The median response time was 16 min (interquartile range (IQR) 2–141 min) for the automated paging group and 39.5 min (IQR 7–104.5 min) for the usual care group (two-tailed Wilcoxon rank sum test: p=0.33, t=0.97, N=165). The median difference in response time was 5 min lower in the automated paging group (95% CI 20 min lower to 6 min higher in the automated paging group). There was no difference in the proportion of response times 2 h or longer (21% in the paging group, 28% in the usual care group, difference 7% lower, 95% CI 20% lower to 6% higher, p=0.30).
We did a secondary analysis of the 141 critical values with a documented time of order, excluding the 24 critical values that only had a documented time of treatment administration. For this secondary analysis, the median response time was 12 min (IQR 1–124 min) for the automated paging group and 36 min (IQR 5–97 min) for the usual care group (two-tailed Wilcoxon rank sum test: p=0.20, t=1.27, N=141) (table 3). We also conducted a secondary analysis using multiple imputation for the 61 critical values with no response time data. In this analysis of 226 critical values, the median response time in the automatic paging group was 30 min (IQR 2–155 min), whereas the median response time in the usual care group was 43 min (IQR 5–132 min) (two-tailed Wilcoxon rank sum test p=0.67, t=0.44, N=226).
We found a 23-min reduction in median response time to critical laboratory values with automated paging, but this difference was not statistically significant. Our results contrast with a prior study that found a significant reduction in median response time from 96 to 60 min with an automated alerting system.6 In our study, all the critical values were telephoned from the laboratory to the ward, whereas in the prior study, only 50% of alerts met criteria for a telephone notification. We would expect a greater difference between automated paging and usual care in the absence of a telephone notification system. The median response time in our usual care group was 39.5 min, whereas prior studies have found median response times after telephone notification of critical values to be 66–213 min.6 10 11 Therefore, there was less room to improve the response time in our usual care group.
Our study has some limitations. First, we were unable to measure a response time for 26% of eligible critical values. However, our analyses using imputed data for the missing values did not change our conclusions. Computerised medication order entry systems and medication administration records would facilitate the collection of response time data in future studies. Second, the usual care group response time was unexpectedly fast, possibly because we had to encourage our residents to document their response times throughout the study. In typical practice, we expect that usual care response times would be longer. Third, we did not evaluate the reliability of our study nurse's data abstraction or our study physicians' judgements. Laboratory result times and physician ordering times are unambiguously recorded in our charts, so we expect the reliability of nurse data abstraction would be high. Fourth, the clinical importance of a 23-min reduction in response time to critical values is uncertain. A prior study of automated alerts found that a statistically significant 36-min reduction in time to order was not associated with differences in time to resolution of the laboratory abnormality, adverse events, length of stay or mortality.6
We also experienced some unintended downsides of the automated paging system. First, in our pilot phase, some critical values were viewed as nuisances, such as repeated critical troponin values. Second, our physician schedules were not fully automated, so it was impossible to route a critical value alert to a specific physician. Instead, we had to route each alert to a dedicated critical value pager, which was handed off between physicians. Our physicians on call had to carry numerous additional pagers to receive the critical value alerts, and at times they were unsure which of the many pagers was alerting. We have started a separate project to fully automate the physician scheduling system, so that the alerts can be directly routed to the responsible provider's pager 24 h a day, without the need for additional dedicated critical laboratory pagers. Automated physician scheduling is essential for optimal performance of a critical value paging system.
Our results suggest that the full value of automated paging systems lies beyond simple notification of critical laboratory values. In an ongoing study, we are designing alerts for conditions that currently do not generate a telephone call, such as sudden changes in laboratory values, or hazardous laboratory-drug combinations.12 In addition, we are attaching real-time decision support to the critical value alerts, so that clinicians have current hospital guidelines and protocols immediately at hand when dealing with critical values.
This manuscript is dedicated to the memory of Dr William Sibbald. We are indebted to the residents who used the automated paging system and provided their feedback. We thank Karen Tabor for recording the response times and conducting the chart reviews. We are grateful to our students, Kimberly Luu, Aliya Noormohamed and Tarrick Buttu, for their help with data management. We thank New Age Systems Inc for their support of the study.
Funding This study was partially funded by the University of Toronto Department of Medicine Quality Partners Program.
Competing interests New Age provided their system and services at a reduced cost, but had no role in study design, data collection, data analysis and interpretation and manuscript preparation. The rest of the study was funded by the University of Toronto Department of Medicine Quality Partners Program. The study investigators retained complete control over study design, data collection, data analysis and interpretation and manuscript preparation. We provided New Age with confidential copies of submitted abstracts and manuscripts. Dr Etchells presented preliminary results at the 2006 Healthcare Information and Management Systems Society conference; New Age paid for Dr Etchells' return economy airfare and hotel room (total value ∼C$1000).