What this study adds

  • We developed a method for summarising prescribing error data for presentation to clinical specialties.

  • Lead clinicians found this feedback to be useful and acceptable.

  • Ward pharmacists identified prescribing errors in 9.2% of newly written medication orders in one clinical directorate.

  • Incident report data are subject to gross under-reporting and are not useful for providing quantitative estimates of error rates.

  • Routinely providing feedback for each consultant team or for individual prescribers will require more focussed data collection.

Introduction

Prescribing errors are common and have the potential for serious patient harm [1]. In the UK, hospital pharmacists identify and resolve prescribing errors as part of their routine daily monitoring of all prescriptions. In one study, pharmacists identified a prescribing error in 1.5% of all inpatient medication orders written, one quarter of which were potentially serious [2]. However, when prescribers involved with potentially serious errors were interviewed, most stated that they were unaware of having made any errors in the past [3].

Such a lack of awareness may be because the systems of prescribing, dispensing and administration of drugs involve many people, often of different professions. This could be regarded as a safety feature as it may increase the chance of an error being identified. However, another consequence is that errors are most commonly identified by someone other than the original prescriber. The immediate aim on identifying an error is usually to resolve it, with feedback to the individual prescriber taking a lower priority. Hence prescribers rarely have the opportunity to learn from their prescribing errors. It has been suggested that increasing such feedback could increase the efficiency of learning [4, 5]. However, even where feedback is given, there is no opportunity for prescribers to benchmark their practice against that of others.

Provision of feedback about practice has been found to be useful in other areas of health care. A monitoring method known as the variable life-adjusted display (VLAD) was developed for use in cardiothoracic surgery for monitoring death rates for individual consultants or units and benchmarking against peers [6, 7], and is now in routine use. It would seem logical to develop similar methods for providing feedback about prescribing errors. We therefore conducted a pilot study in one clinical directorate to explore the practicalities of obtaining, analysing and presenting prescribing error data for feedback to medical staff. Our objectives were to explore the feasibility of routinely obtaining data on prescribing errors together with meaningful denominators, to design a comprehensive summary of these data, and to assess the feasibility and acceptability of presenting data on prescribing errors to medical staff.

Methods

Setting

The study took place during a 4-month period (February–May 2005 inclusive) in one clinical directorate in a London teaching trust. The directorate comprised ten specialties, most of which were represented at each of the two main hospital sites. We studied all wards linked to this directorate. Wards received a pharmacy service typical of that in UK hospitals; a pharmacist visited each ward each weekday to check that all medication orders were clear, legal and appropriate for the patient, check patients’ drug histories, resolve any problems identified, and supply any non-stock medication required. The study was approved by the local research ethics committee.

Methodological issues

The feasibility and effectiveness of two different data collection methods were explored. One involved ward pharmacists collecting data on newly written medication orders on one day each fortnight; the other involved accessing incident report data.

Selection of an appropriate denominator is essential when presenting prescribing error data [1]. In addition to the number of newly written medication orders, we obtained data on two other measures of activity for each clinical specialty. These were occupied bed days (OBDs) and finished consultant episodes (FCEs). Each hospital stay may be associated with more than one FCE.

Feedback could theoretically be given at the level of the individual prescriber, the consultant team, the clinical specialty or the whole organisation. For the purposes of this pilot study, we decided to explore the feasibility of providing feedback at the levels of consultant team and clinical specialty.

Data collection

A letter was sent to all consultants within the selected directorate, giving details of the study. Consultants were asked to inform us if they preferred not to be included.

We used a published definition of a prescribing error, developed using consensus methods [8] and recently used by the UK Department of Health [9]. A prescribing error was therefore defined as a prescribing decision or prescription-writing process that results in an unintentional, significant: (i) reduction in the probability of treatment being timely and effective or (ii) increase in the risk of harm, when compared to generally accepted practice. According to this definition, prescribing without taking into account the patient’s clinical status, failure to communicate essential information and errors in transcribing (from one prescription to another) are all considered prescribing errors. Failures to adhere to standards such as prescribing guidelines or the drug’s product licence are not considered prescribing errors where these reflect accepted practice.

On one day each fortnight, pharmacists providing ward pharmacy services to the twenty wards within the selected directorate were asked to record data on any prescribing errors identified in newly prescribed regular, when required and discharge medication. Data were collected on four alternate Wednesdays and then on four alternate Mondays. We excluded any errors that related to medication orders previously screened by a pharmacist. The pharmacists also recorded the number of newly prescribed regular, when required and discharge prescription items seen, and the consultant team. Neither patients’ names nor hospital numbers were recorded. To minimise workload, pharmacists were also asked to indicate whether the error was one for which they would usually have completed a medication incident report, in which case the research team completed one on their behalf. Each pharmacist was given a verbal briefing about the study, together with a written summary, by one of the research team before data collection began. A total of 30 pharmacists collected data over the course of the study.

A work-sampling approach was used to estimate the additional time required for pharmacists to collect the data. A research pharmacist accompanied four different ward pharmacists on their visits to a total of five wards and recorded the total time taken. A random interval work-sampling device (JD7 Random Reminder, Divilbiss Electronics) was used to identify 32 time samples each hour for which the pharmacist’s activity was recorded as being related or unrelated to the study.
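As an illustration of the work-sampling calculation, the fraction of randomly sampled time points classed as study-related is applied to the total observed visit time. The sketch below uses hypothetical figures, not the study’s observations:

```python
# Illustrative work-sampling calculation (hypothetical figures, not study data).
# At random time points during each observed ward visit, the pharmacist's
# current activity is classed as study-related or not; the proportion of
# study-related samples, applied to the total visit time, estimates the
# extra time attributable to data collection.

visits = [
    # (total_visit_minutes, samples_taken, samples_study_related)
    (55, 29, 5),
    (40, 21, 3),
    (62, 33, 6),
    (48, 26, 4),
    (50, 27, 4),
]

total_minutes = sum(t for t, _, _ in visits)
total_samples = sum(n for _, n, _ in visits)
study_samples = sum(s for _, _, s in visits)

study_fraction = study_samples / total_samples
study_minutes = study_fraction * total_minutes
minutes_per_visit = study_minutes / len(visits)

print(f"Estimated study-related time: {study_minutes:.0f} min "
      f"({minutes_per_visit:.1f} min per ward visit)")
```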

All prescribing errors reported as medication incidents for the study directorate were retrieved. We estimated the percentage of all prescribing errors that were reported as medication incidents.

Feedback to prescribers

Developing an easily understood summary for feedback to clinicians was a key part of the study. The summary was developed using repeated prototyping, exploring many different methods for presenting the data. This parallels the process used to develop the VLAD charts now prevalent in monitoring outcomes in cardiac surgery [6].

The final feedback report consisted of three graphical summaries, a list of the errors identified for the team concerned and a commentary. The first graph was a stacked bar chart showing the number of new medication orders with and without an error for each team, with the identity of the other teams concealed. The second showed the proportion (with 95% confidence interval) of new medication orders containing at least one error for that team and for all others combined. The third plotted the cumulative number of new medication orders with at least one error against the cumulative number of new orders written by the team concerned, with a line representing the average error rate of all other specialties. Following feedback of the relevant report to the lead clinician of each specialty, the consultants were asked for their comments by email and informal interview.
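For illustration, a minimal sketch of how the first two graphical summaries might be produced is given below; the team labels and counts are invented, and this is not the code used in the study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented example data (not the study's figures): new medication orders
# with and without at least one error, for the team receiving feedback and
# for four other (anonymised) teams.
teams = ["Your team", "Team 1", "Team 2", "Team 3", "Team 4"]
with_error = np.array([38, 45, 22, 61, 30])
without_error = np.array([350, 512, 260, 590, 320])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Graph 1: stacked bar chart of orders with and without an error per team,
# with other teams' identities concealed behind anonymous labels.
ax1.bar(teams, without_error, label="No error")
ax1.bar(teams, with_error, bottom=without_error, label="At least one error")
ax1.set_ylabel("New medication orders")
ax1.legend()

# Graph 2: proportion of orders with at least one error, with 95% confidence
# intervals (normal approximation), for the team and for all others combined.
groups = ["Your team", "All other teams"]
errs = np.array([with_error[0], with_error[1:].sum()])
totals = np.array([with_error[0] + without_error[0],
                   (with_error[1:] + without_error[1:]).sum()])
p = errs / totals
se = np.sqrt(p * (1 - p) / totals)
ax2.errorbar(groups, 100 * p, yerr=100 * 1.96 * se, fmt="o", capsize=4)
ax2.set_ylabel("Orders with at least one error (%)")

plt.tight_layout()
plt.show()
```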

Results

No consultants requested to be excluded.

Prescribing errors recorded by ward pharmacists

For patients within the directorate studied, 4,995 newly written medication orders were examined. Of these, 462 (9.2%; 95% confidence interval 8.5–10.1%) contained at least one prescribing error. The total number of prescribing errors identified was 474.

The errors identified each day are summarised in Table 1. For the first four data collection days (Wednesdays), one or more errors were identified in 9.8% of 2,158 medication orders. For the four subsequent data collection days (Mondays), one or more errors were identified in 8.8% of 2,837 orders. This difference of 1% was not statistically significant (95% confidence interval −0.8 to 2.6%). There were one or more errors in 253 (9.5%) of 2,677 medication orders at site 1, and in 209 (9.0%) of 2,318 at site 2 (95% confidence interval for the difference −1.2 to 2.0%). Error rates by specialty and site are shown in Fig. 1. The numbers of OBDs and FCEs for each specialty are summarised in Table 2, together with ranked error rates. The relative ranking of specialties by error rate differed markedly depending on the denominator used.
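For illustration, the reported intervals appear consistent with a Wilson score interval for the overall proportion and a simple normal approximation for the between-site difference. The sketch below reproduces those figures, although the study does not state which methods were actually used:

```python
from math import sqrt

Z = 1.96  # two-sided 95% critical value

def wilson_ci(errors: int, n: int) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a proportion."""
    p = errors / n
    centre = (p + Z**2 / (2 * n)) / (1 + Z**2 / n)
    half = Z * sqrt(p * (1 - p) / n + Z**2 / (4 * n**2)) / (1 + Z**2 / n)
    return centre - half, centre + half

# Overall error rate: 462 of 4,995 newly written orders (reported figures).
low, high = wilson_ci(462, 4995)
print(f"Overall: {462/4995:.1%} (95% CI {low:.1%} to {high:.1%})")

# Difference between sites: 253/2,677 at site 1 vs 209/2,318 at site 2,
# using a normal approximation for the difference of two proportions.
p1, n1 = 253 / 2677, 2677
p2, n2 = 209 / 2318, 2318
diff = p1 - p2
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
print(f"Site difference: {diff:.1%} "
      f"(95% CI {diff - Z*se:.1%} to {diff + Z*se:.1%})")
```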

Table 1 Medication orders examined and prescribing errors identified on each study day
Fig. 1 Error rates by specialty and site, with 95% confidence intervals. The dotted line represents the average value for all specialties. Each specialty is represented by a letter, with numbers 1 and 2 representing the two sites

Table 2 Occupied bed days (OBDs) and finished consultant episodes (FCEs) for each specialty on each site, based on overnight stays only, for February to May 2005 inclusive

Data collection was observed to require an average of nine minutes per ward per day. Ward pharmacists reported that it would be feasible to collect these data monthly or less often; the additional time required for documentation meant that collecting the data more frequently, or on an ongoing basis, was not considered feasible.

Incident reports

For 19 (4%) of the 474 errors identified by ward pharmacists, the pharmacist indicated that they would like the research team to report the error as a medication incident.

On the days when study data were not being collected, only eight prescribing errors were reported as medication incidents for the study directorate. Assuming that similar numbers of errors are identified by pharmacists on each working day, approximately 4,400 errors would have been identified over the 75 days of the study period on which data were not collected. We therefore estimate that only 8 (0.2%) of about 4,400 prescribing errors identified by pharmacists on non-data-collection days were actually reported as medication incidents.
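Written out, the extrapolation is: 474 errors ÷ 8 data collection days ≈ 59 errors identified per working day, and 59 × 75 non-collection days ≈ 4,400 errors; this assumes, as stated above, that the eight data collection days were typical working days.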

Feedback to prescribers

Although we had wanted to provide feedback to individual consultant teams, it was often not possible to identify the relevant consultant team from the drug chart. We therefore provided feedback at the level of the clinical specialty. Reports were produced for each specialty across both hospital sites, with the exception of one very large specialty for which separate reports were produced for each site. Reports were sent to the 11 relevant lead clinicians; an example is given in Appendix 1. Seven consultants responded by email and two were interviewed. All found the feedback helpful and interesting; most asked if they could receive similar reports routinely.

Discussion

The key issue underlying this study was the notion that giving feedback to medical staff about prescribing errors is a means of driving down error rates. This pilot study has established that it is feasible to collect data suitable for this purpose, that such data can be presented to consultants in a form that is easy to understand, and that consultants find such a process acceptable. Whether implementation of such feedback as a routine part of a hospital’s operation would be an effective means for reducing error rates remains to be seen and requires a much larger study.

At least one error was identified in 9.2% of all newly written regular, when required and discharge medication orders screened by pharmacists. There have been few studies of prescribing errors in UK hospital inpatients. Dean et al. [2] previously reported an error in 1.5% of all medication orders written across a London teaching hospital. The error rate in the present study was therefore considerably higher. However, there are some key methodological differences. First, in the present study, we asked pharmacists to focus specifically on newly written medication orders. Second, we included in the denominator only those medication orders seen by the pharmacist. Our previous study [2] included in the denominator all medication orders written for all patients, regardless of whether or not a pharmacist had seen their chart. Third, we included only regular, when required and discharge medication orders. Medication orders for once-only medication and intravenous infusions, which are likely to be associated with lower error rates [2], were excluded. Two other studies have used the same approach as we used here. Haw and Stubbs [10] identified errors in 2.2% of all items reviewed in psychiatric inpatients, and in an 880-bed hospital, Tully and McElduff [11] reported an error rate of 10.5%, very similar to that reported here.

Our results provide no evidence to suggest that error rates are higher on Mondays, as is often perceived. Rather, slightly more new orders were examined on Mondays, so the absolute number of errors identified may be higher. There was also no difference between hospital sites.

There was considerable variation between specialties in the error rates detected. Specialties A1 and A2 had higher than average error rates; these were specialties with a high patient turnover. However, in general, confidence intervals were wide and it is difficult to draw firm conclusions.

The percentage of prescribing errors reported as medication incidents was very low. Our results suggest that pharmacists perceive an incident report form to be merited for 4% of the errors identified, but forms are actually completed for about 0.2%. Reporting an error requires awareness of its occurrence, knowledge of how to report it and motivation to do so; other studies also suggest gross under-reporting [12, 13] and present some reasons why [14]. Incident report data cannot be used to draw quantitative conclusions about error rates.

The denominator used to express the error rate had a substantial impact on how specialties ranked relative to one another. Very different conclusions could be drawn depending on whether the denominator is the number of new medication orders examined, FCEs or OBDs. We suspect that this is partly because medication orders written on admission are associated with higher error rates, owing to errors in medication history taking. The number of medication orders examined is likely to be the most meaningful denominator, but future work should differentiate between orders written on admission and those written during the remainder of the patient’s stay.

Challenges and limitations

A number of methodological challenges were identified. First, for many patients, particularly those at site 2, the relevant consultant team was not documented on the drug chart; patients were instead documented as being under the care of the relevant specialty. As a result, it was not possible to report data for each consultant team. We had initially intended to provide feedback at the level of the consultant team, as we felt it important to encourage consultants to take responsibility for the quality of prescribing within their team. It is likely that data could be collected by consultant team at site 1; if hospital numbers were recorded, the hospital information system could be used to identify consultants for patients at site 2. Second, in line with previous studies [2], we suspect there was variation both in pharmacists’ ability to detect prescribing errors [15] and in their diligence in reporting them. This is likely to be greatest for the more minor errors, but has important implications for the interpretation of these data. Third, for the purposes of this pilot study we chose not to assess the severity or type of errors. Fourth, we were not able to elicit more formal feedback from consultants, such as through structured questionnaires, within the time available. Finally, there may have been inaccuracies in the specialties documented for patients who were transferred between specialties but whose drug charts were not updated.

Implications for future work

We anticipate that, if such data collection were adopted for routine ongoing use, it would be carried out less often, probably monthly; we collected data fortnightly during this pilot study in order to obtain a sufficient sample. It may also be argued that if the feedback has its desired effect in reducing errors, data collection will involve less work.

We spent considerable time on the definition of an error used and on training our pharmacists to collect these data. Even so, we feel there was variation between pharmacists in the data collected. Future studies should allocate sufficient resources to pharmacist training, provide reminders about the study, and give real-time feedback about any recorded events that do not meet the study’s definition of an error.

It is widely believed that electronic prescribing systems will facilitate the collection of data of this type. However, appropriate reporting facilities would need to be set up. To obtain data equivalent to those reported here, any such system would need to report the number of new prescriptions written per team per time period.
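As a sketch of the kind of reporting such a system would need to support, the example below summarises new orders per team per month; the column names are hypothetical and are not taken from any particular electronic prescribing system:

```python
import pandas as pd

# Hypothetical extract from an electronic prescribing system; the column
# names are invented for illustration.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4, 5],
    "consultant_team": ["A", "A", "B", "B", "B"],
    "prescribed_at": pd.to_datetime([
        "2005-02-03", "2005-02-17", "2005-02-09", "2005-03-02", "2005-03-15",
    ]),
    "error_flag": [False, True, False, False, True],  # error recorded by pharmacist
})

# New prescriptions written, and those with an error, per team per month.
summary = (
    orders
    .assign(month=orders["prescribed_at"].dt.to_period("M"))
    .groupby(["consultant_team", "month"])
    .agg(new_orders=("order_id", "count"),
         orders_with_error=("error_flag", "sum"))
    .reset_index()
)
summary["error_rate"] = summary["orders_with_error"] / summary["new_orders"]
print(summary)
```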

Finally, classifying errors according to type and clinical severity should be considered for future work.

Conclusion

It is feasible to provide feedback on prescribing errors at the level of the clinical team, and acceptable to the consultants involved. We have designed a method for summarising these data for individual clinical specialties. Providing feedback for each consultant team or for individual prescribers will require more intensive data collection methods. Incident report data are subject to gross under-reporting when compared with data recorded by ward pharmacists. Further work should include a larger study to determine whether providing feedback in this way can lead to a measurable reduction in prescribing errors.