Objective To compare the incidence and severity rating of dose prescribing errors before and after the implementation of a commercially available electronic prescribing system at a tertiary care children's hospital.
Methods Dose errors were identified by prescription review. Severity was rated by five judges using a validated, reliable scoring tool; the mean score for each error was used as an index of severity.
Results Dose prescribing errors occurred in 88 of the 3939 (2.2%) items prescribed for outpatients and inpatients, and on discharge prescriptions, prior to the implementation of electronic prescribing (EP). After EP, there were 57 dose errors in 4784 (1.2%) items prescribed (1% absolute reduction; 95% CI of difference in proportions −1.6% to −0.5%, p<0.001, χ2 test). A decrease in the severity rating of dose errors was also seen: dose errors with potentially minor outcomes 35/3939 (0.89%) pre vs 21/4784 (0.44%) post (95% CI of difference in proportions −0.8% to −0.11%, p=0.009, χ2 test); moderate outcome 46/3939 (1.17%) pre vs 33/4784 (0.69%) post (95% CI of difference in proportions −0.91% to −0.08%, p=0.019, χ2 test); severe outcome 7/3939 (0.18%) pre vs 3/4784 (0.06%) post (95% CI of difference in proportions −0.31% to +0.04%, p=0.11, χ2 test).
Conclusion Electronic prescribing appears to reduce rates of dosing errors in paediatrics, but larger studies are required to assess the effect on the severity of these errors and in different settings.
- Electronic prescribing
- medical order entry systems
- medication errors
- dose errors
- information technology
Prescribing in children is complex, and errors may occur at any stage during prescribing, dispensing or administration by the nurse or parents.1 2 Paediatric doses are usually based on body weight or body surface area, which can change rapidly in younger age groups.1 The age of the child may also affect dosing; for example, corrected gestational age must be taken into account for premature neonates. In chronic conditions, the growth of children needs to be monitored to ensure appropriate drug dosage modifications are made. Many medicinal products are not licensed for use in children, which means that the available formulations may not be appropriate for the doses needed in children.3 As a result, multiple calculations and manipulations may be required before a medicine reaches the child. Consequently, it is unsurprising that dose errors across all stages of the medicine-use process are considered the most common type of medication error in this patient group. A number of contributory and causative factors have been identified, including the calculation skills of both parents and healthcare professionals and the lack of availability of suitable formulations.
Quality of prescribing has also been implicated, with 10-fold errors most commonly attributed to misplaced decimals, the use of trailing zeros and illegible prescriptions.4 Several strategies and solutions have been employed to minimise the incidence of dose errors, including calculators,5 educational programmes,6 7 computer software8 9 and colour-coded tapes.10 A recent systematic literature review commissioned by the Patient Safety Research Programme in England found that, of the interventions used to reduce dose errors, computerised physician order entry (CPOE) or electronic prescribing (EP) is the commonest and shows most promise.11 In this study, our aim was to compare the incidence and severity rating of paediatric dosing errors before and after the implementation of a commercially available electronic prescribing system with basic clinical decision support.
The study was conducted at an acute tertiary care paediatric hospital during implementation of a commercially available EP system (JAC Computer Services Ltd). The hospital, which has 314 inpatient beds, offers the widest range of paediatric specialities in the UK, including 21 medical, 11 surgical and eight diagnostic specialities, plus eight paramedical and other clinical support services including pharmacy, physiotherapy, psychology, dietetics and speech and language therapy. It became the first children's hospital in the UK to implement a commercially available electronic prescribing and medicines administration (EPMA) system in October 2005.
The system has been described in detail elsewhere.12 The actual process for prescribing was the same in the outpatient and inpatient settings. However, inpatient prescribing was mainly by junior doctors with support from the clinical team (senior clinicians, pharmacists and nurses) on the ward, whereas senior doctors (registrar or consultant) prescribed in the outpatient clinics. In terms of clinical decision support, the key aspect relevant to the findings presented here is that the system (a) alerted the prescriber if the height or weight entered was outside the expected 96th centile range based on the child's age (tables for height and weight based on age had been set locally); (b) prompted for the patient weight to be updated if the date of the previous entry exceeded the specified time period for the age of the child, for example, for older children, the weight needed to be revalidated on a monthly basis; and (c) alerted for weight change of ±10% compared with the previous weight entry. There was no dosing-specific clinical decision support, that is, the system did not perform dose calculations or minimum and maximum dose checks based on the child's age or weight, nor did it offer any dosing suggestions.
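The weight-related alerts described above can be sketched in code. This is an illustrative reconstruction only: the function names, the neonatal cut-off and the exact revalidation intervals (daily for neonates, weekly under 1 year, monthly otherwise, as stated later in the Discussion) are our assumptions, not the vendor's implementation.

```python
from datetime import date, timedelta

def weight_revalidation_interval(age_years: float) -> timedelta:
    """Revalidation period by age (illustrative): daily for neonates,
    weekly for infants under 1 year, monthly for all other children."""
    if age_years < 28 / 365:          # assumed neonatal cut-off (28 days)
        return timedelta(days=1)
    if age_years < 1:
        return timedelta(weeks=1)
    return timedelta(days=30)

def weight_alerts(weight_kg, previous_weight_kg, last_entry: date,
                  today: date, age_years: float):
    """Return the alerts a prescriber would see for a new weight entry:
    a stale-weight prompt and a >10% weight-change warning."""
    alerts = []
    if today - last_entry > weight_revalidation_interval(age_years):
        alerts.append("weight entry out of date - please revalidate")
    if previous_weight_kg and \
            abs(weight_kg - previous_weight_kg) / previous_weight_kg > 0.10:
        alerts.append("weight changed by more than 10% since previous entry")
    return alerts
```

For example, a 2-year-old whose recorded weight is two months old and has risen from 10.0 kg to 11.5 kg (a 15% change) would trigger both alerts.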
Dose errors were identified as part of a wider study of incidence and types of prescribing errors in outpatient, inpatient and discharge prescriptions over a 13-month period from July 2005 to July 2006. We prospectively collected prescriptions from renal inpatients and outpatients, and for patients discharged from the renal and urology wards, which were evaluated at a later date for errors. In the pre-EP phase, copies of paper prescriptions were obtained; post-EP, copies of printouts from the EP system used for dispensing were obtained.
Prescriptions were initially reviewed by the ward or dispensary pharmacist as part of their routine duties. The pharmacists were provided with training on the definition of a prescribing error and given examples of what would be considered an error. Inter-reviewer reliability scores were not calculated, but one of us (YJ) reviewed all the errors throughout the data-collection period to ensure consistency.
A prescribing error was defined as:
a clinically meaningful prescribing error which occurs when, as a result of a prescribing decision or prescription ordering (original definition states ‘writing’) process, there is an unintentional significant (a) reduction in the probability of treatment being timely and effective or (b) increase in the risk of harm when compared with generally acceptable practice.13
Errors involving dose and frequency were considered dose errors if the single dose and/or total daily dose met the criteria described by Ghaleb et al.13 Severity rating was determined using a validated, reliable scoring method.14 Five experienced paediatric healthcare professionals (three doctors and two pharmacists) were asked to score a sample of prescribing errors in terms of potential patient outcomes on a scale of 0–10, where 0 represents a case with no potential effect and 10 a case that would result in death. The mean score for each error was used as an index of severity, whereby a mean score less than 3 was considered to be of minor outcome, a mean score between 3 and 7 was considered to be of moderate outcome, and a mean score greater than 7 was considered to be of severe outcome.15 The judges were blinded to the prescription type, that is, they were not aware if a particular error had taken place before or after the EP system was implemented.
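The severity banding described above reduces to a simple rule: the mean of the five judges' 0–10 scores is classed as minor (<3), moderate (3–7) or severe (>7). A minimal sketch (function name is ours, not from the study):

```python
def severity_band(scores):
    """Band an error's severity from the judges' 0-10 scores:
    mean < 3 -> minor; 3 <= mean <= 7 -> moderate; mean > 7 -> severe."""
    mean = sum(scores) / len(scores)
    if mean < 3:
        return "minor"
    if mean <= 7:
        return "moderate"
    return "severe"
```

So scores of (1, 2, 2, 3, 1) from the five judges (mean 1.8) would band as minor, while (8, 9, 7, 8, 9) (mean 8.2) would band as severe.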
The dosing error rate was calculated as the number of dose errors divided by the total number of items prescribed, expressed as a percentage. The 95% CIs of the difference in proportions before and after EP were calculated.16 A repeated-measures analysis of variance and the generalisability theory was used to determine the reliability of severity rating scores.14
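The rate and interval calculation can be reproduced from the headline figures (88/3939 pre-EP vs 57/4784 post-EP). The sketch below uses the standard Wald (normal-approximation) 95% CI for a difference in two proportions; the cited reference may use a slightly different method, but this recovers the reported interval.

```python
import math

def diff_in_proportions_ci(x1, n1, x2, n2, z=1.96):
    """Difference in proportions (p2 - p1) with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

diff, lo, hi = diff_in_proportions_ci(88, 3939, 57, 4784)
print(f"difference {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# -> difference -1.0% (95% CI -1.6% to -0.5%)
```

This matches the −1.6% to −0.5% interval reported in the Results.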
There were 145 dose errors in 8723 prescriptions of all types, that is, inpatient, discharge and outpatient. Dose errors occurred in 88 out of 3939 (2.2%) prescriptions before EP compared with 57/4784 (1.2%) after, resulting in an absolute reduction in dose error rate of 1% (95% CI −1.6% to −0.5%, p<0.001, χ2 test). Patient demographics and dose error rates by prescription type are given in table 1.
Figure 1 illustrates a breakdown by prescription type and severity rating. There was no apparent change in error rates after implementation of the system in the inpatient setting, but a decrease was seen in outpatient and discharge prescriptions, with the latter showing most difference. Before EP, many of the dose errors were as a result of handwriting and involved incorrect dose units, for example, milligrams instead of micrograms, especially in discharge and outpatient prescriptions. After EP, these errors were eliminated. Knowledge errors, such as failure to adjust dose in the presence of renal impairment or following therapeutic monitoring and calculation errors, occurred predominantly in the inpatient setting and continued after EP implementation. A new type of error was seen in all prescription types due to mis-selection from drop-down menus, for example, selecting twice a week instead of twice a day.
Errors with potentially severe outcomes were eliminated in discharge and outpatient prescriptions after implementation. However, it was not possible to assess the statistical significance of this difference, as the numbers involved were small, with four such errors in outpatient prescriptions and two in discharge prescriptions before EP.
There was good agreement among the five judges for severity rating, as indicated by the generalisability coefficient value of 0.82, which is equivalent to inter-rater reliability and indicates 82% agreement. Dose errors with the potential to result in minor and moderate outcomes decreased after EP: errors with minor outcome 35/3939 (0.89%) pre- vs 21/4784 (0.44%) post- (95% CI of difference in proportions −0.8% to −0.11%, p=0.009, χ2 test); moderate outcome 46/3939 (1.17%) pre- vs 33/4784 (0.69%) post- (95% CI of difference in proportions −0.91% to −0.08%, p=0.019, χ2 test). A similar trend was seen for dose errors with the potential for severe outcomes: 7/3939 (0.18%) pre- vs 3/4784 (0.06%) post-. However, this was not statistically significant (95% CI of difference in proportions −0.31% to +0.04%, p=0.11, χ2 test). Examples of dose errors in each category are given in table 2.
There have been several studies17–26 of the effect of CPOE (mostly with decision support) on paediatric medication errors, a few of which have specifically focussed on dosing errors.5 27 28 Most studies show a reduction in error rates at all stages of medicine use following implementation of CPOE/EP, with one study reporting no difference in dose error rates at the site with CPOE28 and two reporting new types of errors as a direct result of these systems.26 29
Our findings are consistent with the literature, as they show that EP can reduce dosing errors, even in the absence of dose-related advanced clinical decision support. The 1% absolute reduction in dose error rates (p<0.001, χ2 test) in our study is lower than that reported by others.5 27 One study showed a 15.6% reduction in dose error rates for paracetamol and promethazine,5 whereas another reported that there were no gentamicin dosing errors after EP.27 The larger reductions in dose errors in these studies may be explained by the fact that both EP systems included dose calculations or recommendations for a specific group of drugs. In contrast, our system did not include any dose calculations or recommendations and was used for all prescriptions.
It may be postulated that the decline in dosing errors seen in our study was related either to another concurrent change in the hospital or to a declining background rate of dosing errors. However, it is not possible to refute or support this in the absence of any longitudinal data locally at the study site, or indeed in any UK hospital setting. While accepting all the limitations of using incident reports for error identification, a review of critical incident reports at the study site showed no change in reporting rates during the study period. There were no other changes in the hospital, for example orientation training, that may have influenced the outcome.
Nevertheless, dose errors fell from 2.2% to 1.2% of all prescriptions ordered during the study period. This small but significant reduction is an important change because the literature indicates that dose errors are common and most likely to be involved in potential adverse drug events in this patient group.2 The effect of EP on dose errors may be due to a number of reasons. Patient weight is a mandatory field on the EP system, and the user is alerted to update this at regular intervals according to preset criteria, that is, every day for neonates, every week for children younger than 1 year of age and every month for all other children. Therefore, an up-to-date patient weight is always available at the point of prescribing. This is similar to a study by McPhillips et al, which reported better weight documentation at the CPOE site, but found no apparent difference in the prevalence of potential paediatric dosing errors compared with a non-CPOE site.28 Another reason may be that the patient's date of birth is automatically uploaded from the hospital management system, thus ensuring that the child's exact age is also present. While age and weight were not used within the system for automated dose calculation, the need to enter and update the patient's weight, and the visibility of both parameters at the point of prescribing, may have had an indirect influence on prescribing practice, resulting in an increased likelihood of the correct dose being calculated at the point of prescribing. Likewise, improved legibility, including the inability to use unapproved abbreviations within the electronic system, may have contributed to the decrease in dose errors by reducing the risk of confusion with units, misreading or misplacing decimal points and the resultant risk of 10-fold or 1000-fold errors.
Interestingly, most of the reduction was seen in the outpatient and discharge groups, with negligible difference in the inpatient setting. A possible explanation of this may be that before EP, there were very few handwriting errors in the inpatient prescriptions, with most errors being due to inappropriate dose for indication, based on renal function or based on therapeutic levels. Presence of advanced clinical decision support may help prevent these types of knowledge-based errors that continued after EP implementation. For instance, patient-specific dosing guidance or automated dose calculations may have prevented an underdose of trimethoprim, which was judged to have a potentially moderate outcome (table 2). Automated dose calculations are expected with the next software release; it will be interesting to monitor the resultant effect on dose errors.
Recently, there have been increasing concerns that the computer system may introduce new errors.26 30 An example of how this may lead to dose errors was seen in our study. A computer-related error may have occurred due to (mis)selection of co-trimoxazole frequency from the drop-down menu, resulting in an underdose.
Our study has a number of limitations. First, a quasi-experimental, before–after study design was used. However, as with most evaluations of EP systems, a controlled trial (randomised or non-randomised) was considered unfeasible, as the EP system was implemented one ward at a time at a children's hospital where each ward involved a different specialty. Therefore, it would have been difficult to find a matched control. Similarly, randomisation of either prescribers or patients would have been difficult to control and impractical, as the implementation was across the entire ward/clinic. Time series studies may be a future option for further research at the study site but were not feasible at the time of the current study due to time constraints. Second, although dose errors are thought to be the most common type of error in paediatrics, the numbers in our study are quite small. However, this is consistent with the suggestion that studies involving one type of medication error, for example prescribing errors, rather than all types, yield a lower incidence of dosing errors.4 The method we used to identify prescribing errors is a process-based method that focused on intercepted errors, that is, all detected errors were corrected, and so the patients experienced no resultant harm. It is possible that some errors may have been missed.31–34 However, it was anticipated that the greatest effect of the EP system, which had minimal clinical decision support activated at the time of the study, would be on errors relating to the legibility and clarity of prescriptions. These errors are more likely to be identified by prescription review. The focus of this study was the incidence of dose-prescribing errors rather than actual harm due to the errors. However, the severity rating scale allowed relatively objective measurement of potential patient outcomes, had the error not been intercepted.
Although severity outcomes were assigned by five judges from different professions, there was 82% agreement among them.
Electronic prescribing appears to reduce rates of dosing errors in paediatrics, even in the absence of advanced clinical decision support. In view of the fact that considerable effort goes into the development and implementation of dosing-related decision support, further work is required to study this effect in different healthcare settings.
We would like to thank C Booth, E Harrop, S Marks, S Patey and C Stebbing, for assessing the severity of the dose errors.
Funding Great Ormond Street Hospital for Children, First Databank Europe Ltd and JAC Computer Services Ltd.
Competing interests None.
Ethics approval Ethics approval was provided by the Institute of Child Health/Great Ormond Street Hospital Research Ethics Committee.
Provenance and peer review Not commissioned; externally peer reviewed.