Effectiveness of measures to reduce emergency department waiting times: a natural experiment
J Munro, S Mason, J Nicholl

Medical Care Research Unit, University of Sheffield, Sheffield, UK

Correspondence to: J Munro, Medical Care Research Unit, School of Health and Related Research, University of Sheffield, Regent Court, 30 Regent St, Sheffield, S1 4DA, UK; j.f.munro{at}sheffield.ac.uk

Abstract

Objectives: To determine what measures were introduced by emergency departments in response to the national monitoring week in March 2003, and which, if any, of these were most effective in reducing waiting times.

Methods: A postal survey of all emergency departments in England was undertaken to gather data on measures taken. Department waiting times before, during, and after monitoring week were determined from data held by the Department of Health and linked to the survey data for analysis.

Results: A total of 111/198 responses (56%) were received. Departments had taken a wide range of measures to improve waiting times. The commonest were additional senior doctor hours (39%), creation of a “four hour monitor” role (37%), improved access to emergency beds (36%), additional non-clinical staff hours (33%), additional junior doctor hours (32%), additional nursing hours (29%), and triage by senior staff (28%). In 35 departments (32%) no changes were made at all to usual practice. The biggest influence on improved performance during monitoring week was the number of measures that a department took, rather than any specific measure, although there was weak evidence that additional junior medical and non-clinical staff time may have contributed more than other measures.

Conclusions: Improved waiting time performance may depend, at least in the short term, more on the amount of effort expended than on introducing a single effective change. In addition, those measures most likely to be helpful are likely also to require additional resources.

  • emergency departments
  • waiting times
  • service performance
  • targets

Waiting times in hospital emergency departments have been rising for many years.1 In The NHS Plan, the government committed itself to a target of ensuring that “by 2004, no-one should be waiting more than four hours in accident and emergency from arrival to admission, transfer or discharge”.2 In recent years, emergency departments have been asked to achieve this target for 90% of patients, rising to 98% of patients by January 2005.3 However, it remains unclear how waiting times might best be reduced.4

As a part of encouraging and assessing progress towards the target, the Department of Health required emergency departments to achieve four hour department times for 90% of patients for a single week in March 2003 (“monitoring week”). Predictably, as was widely reported at the time, many hospitals made strenuous efforts to meet this target by allocating additional staff or other resources to emergency departments, changing emergency patient management, or in other ways.5 It seems likely that some of the changes made by departments will have been particularly helpful in reducing waiting times, and others less so.

We took advantage of this “natural experiment” to determine which, if any, of the measures taken by departments had been the most effective in achieving the target.

METHODS

In September 2003 we sent a postal questionnaire to the clinical leads of 198 type 1 emergency departments in England asking about any changes that they had introduced for monitoring week, including changes in staffing and physical resources, changes in patient management (such as registration, triage, discharge, or admission), and changes in external support (such as diagnostic services, admission teams, access to beds or other services). We also sought their views on the effect of the week on staff morale. We sent up to two reminders at fortnightly intervals. The survey was piloted in 12 departments prior to use.

We obtained routine data on emergency department waiting time performance from the Department of Health for three distinct weeks before, during, and after monitoring week (weeks beginning 20 January, 24 March, and 12 May 2003, respectively),6 which included information on the proportion of attenders waiting over four hours. We selected before and after weeks with sufficient time separation from monitoring week to avoid “spill over” effects, while avoiding public holidays and junior staff changeover dates. We matched these data to those from our survey so that, for each department responding to our survey, both the measures it took during monitoring week and its waiting time performance before, during, and after the week were available.

However, because the routine data are collected at the level of National Health Service (NHS) trusts rather than hospitals, waiting time performance could not be determined for individual departments in trusts with multiple emergency departments. These trusts were therefore excluded from the analysis. In addition, we excluded those trusts that had undergone merger within the study period.

The effect of the measures taken by departments was estimated by fitting binomial models to the logit of the proportion waiting more than four hours during monitoring week, using generalised linear interactive modelling (GLIM).7 The proportion waiting over four hours in the “before” week was included as a covariate. We first examined whether taking any measures made a difference compared with taking none, allowing for performance in the baseline week. After including the number of measures each department took as an additional explanatory factor, we examined the effect of the measures individually.
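To make the modelling concrete, the following is a minimal illustrative sketch of an equivalent binomial logit model fitted with current software (the original analysis used GLIM; the file name, column names, and grouping of the number of measures below are our own illustrative assumptions, not taken from the study dataset).

    # Illustrative sketch only (not the original GLIM analysis): a binomial GLM with
    # a logit link, modelling the number of attenders waiting over four hours during
    # monitoring week out of all attenders, adjusted for baseline performance and for
    # the number of measures taken (entered as a categorical factor).
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical department-level data; all column names are assumptions.
    df = pd.read_csv("departments.csv")
    df["under4"] = df["attendances"] - df["over4"]   # attenders dealt with within four hours

    fit = smf.glm(
        "over4 + under4 ~ prop_over4_baseline + C(n_measures_group)",
        data=df,
        family=sm.families.Binomial(),               # the default link is the logit
    ).fit()
    print(fit.summary())                             # coefficients are on the log odds scale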

We coded free text responses to our question on the effect of the monitoring week on staff morale according to whether the respondent described morale as increasing, decreasing, or unchanging. We noted the range of issues raised by respondents, but we did not perform formal qualitative analysis of the free text comments.

RESULTS

Sources of data

We compiled a list of 198 emergency departments in England for our survey. Waiting time data were supplied by the Department of Health for 159 English NHS trusts, which we were able to match to 126 of the departments in our list, after excluding trusts which had merged or had multiple departments.

Waiting time performance during monitoring week

Overall, waiting times improved during monitoring week. Across all 159 trusts for which the Department of Health had supplied data, the mean proportion of attenders dealt with within four hours rose from 81% beforehand to 93% during monitoring week, falling to 89% afterwards. This improved performance was achieved despite a reported increase in the mean number of attenders from 1345 beforehand to 1519 during monitoring week.

However, improvement was not uniform across all trusts. In 6 of the 159 trusts the proportion of attenders dealt with within four hours fell during monitoring week (compared with the baseline week), in 34 it rose by less than five percentage points, and in the remaining 119 it rose by more than five percentage points.

Response to postal survey

We received 111 responses to our survey (111/198, 56%). Departments ranged in size from approximately 26 000 to 134 000 new attendances per year (mean 59 000). We found no evidence suggesting that responding and non-responding departments differed in activity during monitoring week (t = 1.2, df = 122, p = 0.23).
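The comparison of responding and non-responding departments described above is a standard two sample t test on monitoring week attendances; a minimal sketch, using purely hypothetical attendance figures rather than the study data, is shown below.

    # Illustrative two sample t test comparing monitoring week attendances between
    # responding and non-responding departments (the figures below are invented).
    import numpy as np
    from scipy import stats

    responder_attendances = np.array([1450, 1620, 1380, 1510, 1290])
    nonresponder_attendances = np.array([1400, 1550, 1330, 1480])

    # Student's t test with equal variances assumed, consistent with df = n1 + n2 - 2.
    t_stat, p_value = stats.ttest_ind(responder_attendances, nonresponder_attendances)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")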

Measures taken by departments

Departments reported a wide range of measures to improve waiting times (table 1). Of these, the commonest were additional senior doctor hours (39% of respondents), creation of a “four hour monitor” role (37%), improved access to emergency beds (36%), additional non-clinical staff hours (33%), additional junior doctor hours (32%), additional nursing hours (29%), and triage by senior staff (“see and treat”) (28%). However, 35 departments (32%) made no changes at all to their usual practice.

Table 1

 Measures introduced by emergency departments for monitoring week (total respondents = 111)

Of the departments that added consultant time, the mean weekly addition was 18.5 hours. Similarly, of those that added specialist registrar time, the mean weekly addition was 32.3 hours; of those that added senior house officer time, 48.6 hours; and of those that added other medical time, 28.0 hours. Among departments that added non-clinical staff resources, such as reception, managerial, or portering time, the mean weekly addition was 51.6 hours.

Effectiveness of the measures taken

We matched waiting time performance data to 72 of the 111 departments responding to our survey. Mean performance among these departments before, during, and after monitoring week was identical to that described above for England as a whole. Table 2 shows the mean proportion of patients waiting over four hours in these 72 departments before, during, and after monitoring week, and the associated short term (baseline to monitoring week) and medium term (baseline to follow up week) changes in these proportions. Among these departments, 52 made at least one change and 20 made no change. The table shows that departments that made changes were, on average, performing less well at baseline but made greater improvements than those that did not; despite starting from a worse position, they ended with a better performance during monitoring week, although not during the follow up week. Notably, even departments claiming to have made no changes for monitoring week reduced the proportion of patients waiting over four hours by an average of five percentage points.

Table 2

 Changes in proportion of patients waiting over four hours in departments that had introduced (different) measures

When the proportion of attenders waiting over four hours in the baseline week was taken into account, the biggest influence on performance during monitoring week was the number of measures that the department took rather than any specific measure (χ2 = 8.2, df = 3, p = 0.04). The association between the number of measures taken and improvement in performance is shown in fig 1.
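In the modelling framework described in the Methods, a test of this kind is usually a change in deviance (likelihood ratio) comparison of nested models: the χ² value is the reduction in deviance when the number-of-measures term is added, and the degrees of freedom equal the number of additional parameters. The reported df = 3 would be consistent with the number of measures having been entered as a factor with four levels, although this detail is not stated explicitly in the text.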

Figure 1

 Effect of number of measures taken on the change in the proportion of patients waiting less than four hours.

When both baseline performance and the number of measures taken were included in the analysis, there was little evidence that the specific type of measure introduced influenced performance. The only changes for which there was any suggestion of a specific effect were additional junior doctor hours, with an estimated odds ratio for waiting over four hours of 0.67 (95% CI 0.42 to 1.09), and additional non-clinical staff time (odds ratio 0.64, 95% CI 0.36 to 1.15). Neither of these effects was statistically significant.
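For reference, on the usual interpretation of a logit model each of these figures is an odds ratio obtained by exponentiating the fitted coefficient, OR = exp(β), with 95% CI = exp(β ± 1.96 × SE(β)). Purely for illustration (these values are not reported in the original analysis), a coefficient of about −0.40 with a standard error of about 0.25 would reproduce an odds ratio of 0.67 with a 95% CI of roughly 0.41 to 1.09.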

Waiting time performance after monitoring week

Following monitoring week, waiting time performance across all English trusts was maintained or improved further in 42 trusts and fell in 117 trusts (comparing the follow up week with monitoring week). Overall, on comparing performance after monitoring week with that beforehand, there was a net improvement in 144 trusts and a net deterioration in 15 trusts.

The majority of the measures taken by departments to meet the monitoring week target were not sustained beyond it. Departments were more likely to sustain measures involving organisational or staff role changes than to sustain increases in staff time or physical capacity, presumably because the former required no additional resources (table 1). Ironically, therefore, the measures which might be most likely to be effective were also least likely to be sustained. Only five departments (5/36, 14%) maintained the increase in junior doctor time beyond monitoring week and seven (7/37, 19%) the increase in non-clinical staff time.

Views of respondents

A total of 68 respondents commented on the effect of monitoring week on morale in their department. Of these, 42 (62%) believed morale had improved, 17 (25%) believed it had worsened, and 9 (13%) reported no change. With regard to the overall view of monitoring week, 33 respondents (30%) offered some comments. Some found the experience useful, but almost a third felt that the experience had created an artificial situation which politicised the problems experienced by many departments.

We also asked about changes not specifically addressed in the questionnaire. Among the 44 comments received, most emphasised increased clinical staff presence, and greater trust-wide awareness, as contributing to the change in performance. A total of 75 respondents offered a view on the most important factor leading to improvement (box 1); the factors cited most commonly were increased awareness throughout the trust and a change in attitude towards emergency workload. Others highlighted the value of increased staffing levels in the emergency department and improved access to beds.

Box 1: What was the most important factor that contributed to improved performance?

Some respondents’ views

  • Higher levels of staffing, oiling of machinery of moving patients, more senior staff of all disciplines in evidence

  • Managers, in particular, worked hard to push patients through system. Not sustained

  • Hospital-wide knowledge of importance of emergency patients

  • Reduction in elective surgery meant that the “direct” medical and surgery referrals went direct to the medical assessment unit (MAU) or a ward opened

  • Enhanced availability of senior medical/nursing staff, increased cooperation from specialties and high level of management intervention and problem solving

  • Staff elsewhere in the hospital responded more efficiently to the needs of accident and emergency (A&E) patients requiring admission

  • Trust-wide awareness of target and its importance drove other departments within the trust to be more efficient and respond to A&E’s needs more urgently

  • Availability of beds. More staff in the department especially senior doctor increase. Management interest, whole systems approach

  • Availability of senior medical and nursing staff which was achieved by providing additional sessions

  • Availability of beds, more staff in A&E, easy access to diagnostics

  • Very hard work by management, medical nursing and supporting staff within A&E and all the trust. Extra staff employed

DISCUSSION

Nationally, emergency department waiting times improved during monitoring week, presumably as a result of the additional resources and other measures introduced by about two thirds of departments. Our analysis suggests that the most important factor improving performance was the number of measures taken, rather than the effect of any particular individual measure. However, we found weak evidence that additional junior medical and non-clinical staff time may have contributed more to improved performance than other measures. Our findings on the response of departments to monitoring week are consistent with a survey undertaken by the British Medical Association at the time, which also found that two thirds of departments “put in place special arrangements” to meet the target.8

The increasing benefits associated with a greater number of measures may simply be the result of the additional direct effects of these measures, or may reflect the degree of “effort” or “commitment” expended by departments and their trusts in improving their performance. The current study is unable to distinguish between these possibilities, though it might be noted that the comments of respondents support the idea that “very hard work” was an important factor in improved performance. This may also be part of the explanation for why departments which said they made no specific changes for monitoring week were still able to improve their waiting time performance.

Evidence on the impact of increasing staff resources on waiting times is hard to find. We have been unable to identify any studies in emergency medicine that have specifically examined the impact of increasing the number of junior medical staff or non-clinical staff on emergency department waiting times. However, the finding that increasing junior medical staff time may be effective in reducing waiting times is unsurprising, and generally consistent with modelling studies.9,10 It is not clear why our data failed to show a performance benefit from increasing nursing or senior doctor input. It is possible that these staff groups had fewer patient consultations compared with those of junior medical staff, that the size of the increase in their time was too low to have any effect, or that the current study was simply too small to show it. However, there may be other explanations and this is an area worthy of further investigation.

Our study has a number of limitations. We achieved a 56% response rate to our survey, which raises the possibility of response bias. Although departments may have been more likely to respond if they had made more effort, or been more successful, in meeting the waiting time target, we found that the performance of responders did not differ from that of non-responders. In addition, although response bias may have affected our estimates of the prevalence of the measures taken, it is unlikely to have had any important effect on our estimates of the effectiveness of those measures. We might have achieved a higher response rate and more complete data if our survey had been carried out immediately after monitoring week, and this would certainly have been helpful in reducing the uncertainty in our estimates of effect.

Our outcome data on waiting time performance were derived from figures reported weekly by emergency departments to the Department of Health. We did not attempt to verify these data independently, and we recognise that they may contain errors. Again, although this may have influenced our overall assessment of how waiting times changed during monitoring week, we think it unlikely that any inaccuracies will have led to spurious relationships between the measures taken and eventual performance.

Finally, our survey addressed only those measures (mainly within the emergency department) which we felt would be directly known to senior emergency clinicians. Thus, we did not take into account factors outside the emergency department, such as hospital bed occupancy or the management of elective admissions, which may also have been important in determining waiting time performance, and may in part have been responsible for the improved performance seen in departments taking no specific measures. In addition, our study does not include any information on the costs of the measures taken by departments, although this is clearly important.

Although the 2003 monitoring week received some adverse media comment, studies such as this indicate the potential for new knowledge to be generated from the “natural experiments” which policy makers may inadvertently provoke. Similar research opportunities offered by future initiatives should be anticipated so that studies can be designed and carried out as close in time to the “natural experiment” as possible.

Since the period examined by this study, emergency departments have been required to meet more stringent waiting time targets.3 The results reported here may be helpful in determining how best to do this, but they present two clear difficulties. Firstly, those measures which may prove to be most helpful are likely to require additional resources. Secondly, even if funding were immediately available there may still prove to be significant difficulties in recruiting and retaining the appropriately skilled staff which emergency departments need.11

Acknowledgments

We are grateful to the British Association of Emergency Medicine, the Faculty of Accident and Emergency Medicine, and Dr Matthew Cooke for their valuable support for this study.

CONTRIBUTORS
 J Munro conceived the study, collected the data and wrote the paper, and is the guarantor of the study. S Mason contributed to the design and analysis of the study and to writing the paper. J Nicholl analysed the data and contributed to designing the study and writing the paper.

Footnotes

  • The study was supported by the core funding of the Medical Care Research Unit. The Department of Health had no involvement with or influence on any aspect of this study.

  • Competing interests: none declared

  • The Medical Care Research Unit receives core funding from the Department of Health. The views expressed here are those of the authors and not necessarily those of the Department of Health.