Computer based medication error reporting: insights and implications
M R Miller1, J S Clark2, C U Lehmann1

  1Department of Pediatrics and Center for Innovations in Quality Patient Care, The Johns Hopkins University, Baltimore, MD, USA
  2Department of Pharmacy, The Johns Hopkins Hospital, Baltimore, MD, USA

  Correspondence to: Dr M R Miller, Director of Quality and Safety Initiatives, Johns Hopkins Children’s Center, CMSC 2-125, 600 N Wolfe Street, Baltimore, MD 21287, USA; mmille21{at}


Background: Despite the growing use of error reporting tools, the healthcare industry is inexperienced in receiving, understanding, and analyzing these reports.

Objective: To assess the accuracy and define the epidemiology of medication error reports.

Design, setting, and patients: A retrospective cohort study of 581 error reports containing 1010 medication errors reported between July 2001 and January 2003 at a large academic children’s institution.

Main outcome measures: Correct classification and types of medication errors.

Results: Of the 1010 medication errors reviewed, 298 (30%) were prescribing errors, 245 (24%) were dispensing errors, 410 (41%) were administration errors, and 57 (6%) involved medication administration records (MAR). Following expert review, 208 errors (21%) were deleted because they had been inappropriately coded as errors and 97 (10%) were added because they had occurred but were not initially coded. In addition, 352 medication errors required reclassification of the error subtype; in 207 (59%) of these the reporter had chosen the non-descript “other” category on the reporting tool (such as “Prescribing other”), which expert review was able to reclassify. The overall distribution of error type categories did not change significantly with expert review, although MAR errors alone were underreported by the reporters. The most common medications were anti-infectives (17%), pain/sedative agents (15%), nutritional agents (11%), gastrointestinal agents (8%), and cardiovascular agents (7%).

Conclusions: Despite clear imperfections in the data captured, medication error reporting tools are effective as a means of collecting reliable information on errors rapidly and in real time. Our data suggest that administration errors are at least as common as prescribing errors in children. Further research is needed, not only in the area of computerized physician order entry (CPOE) for children, but also on ways to make the dispensing and administration of medications safer.

  • CPOE, computerized physician order entry
  • MAR, medication administration record
  • patient safety
  • medical error
  • children
  • inpatients

With the prominent and ever growing focus on patient safety in health care, the development and use of voluntary and computer based error reporting tools has flourished.1–3 The Institute of Medicine (IOM) reports “To Err Is Human” and “Crossing the Quality Chasm: A New Health System for the 21st Century” emphasized reporting systems as a strategy for learning from errors and potentially preventing their recurrence.4,5 The primary purpose of “voluntary reporting systems” is safety improvement. The focus of most reports in these systems is to identify near miss and real patient safety events that point to vulnerabilities in systems that could cause injury in the future.

Systems for reporting, analyzing, and disseminating information on safety events have been institutionalized in a number of safety critical industries including aviation, nuclear power technology, petrochemical processing, steel production, military operations, and air transportation.6,7 Reporting of near miss safety events in particular offers numerous advantages over reporting of adverse events—their greater frequency allows for quantitative analysis and there are fewer barriers to data collection, partly owing to fewer liability concerns.

The Aviation Safety Reporting System (ASRS) represents the most sophisticated and longstanding voluntary external reporting system and is widely credited with substantial improvements in the safety of airline travel in the United States over the past three decades.4,8 According to the Federal Aviation Administration, the risk of dying in a domestic jet flight was one in two million flights during the decade of 1967–76, but had decreased to one in eight million by the 1990s. Risk reduction in aviation is credited to several factors: advancing technology, a focus on teamwork training, the establishment of error reporting systems, and successfully encouraging pilots and other crew members to report errors and incidents.4,8

In addressing this, the IOM report notes: “The experience of ASRS has shown that the analysts reviewing incoming reports must be content experts who can understand and interpret these reports. In health care, different expertise is likely needed to analyze, for example, medication errors, equipment problems, problems in the intensive care unit, pediatric problems, and home care problems”.4 The IOM Committee concluded that voluntary reporting systems have a very important role to play in enhancing our understanding of the factors that contribute to errors.

What the healthcare industry does not yet have is a long history of receiving, analyzing, and acting upon voluntary safety event reports. One recent review of the first 5 years of patient safety activity after “To Err Is Human” gave error reporting systems a “C” grade for their impact, primarily because of the immature ability of healthcare workers to translate error report submissions into action.9 As health care inevitably travels further down the path of implementing error reporting systems, it is important to better understand the strengths and weaknesses of such reporting tools so that we can begin to know what to do with the tremendous amount of data inundating staff focused on safety. The usefulness of these data is directly dependent on the accuracy of the reporters in defining an event, the granularity of the data, and the ability to mine the data for patterns of risk.

We have evaluated over 1000 medication events reported via an online error reporting tool at a large urban academic children’s hospital to gain insight into the accuracy of what is reported and to define the epidemiology of these error reports.


Study site and medication system

The Johns Hopkins Children’s Center is a 168 bed tertiary care medical center with multiple intensive care units and full medical, surgical, and psychiatric services for children including extracorporeal membrane oxygenation, trauma, bone marrow and solid organ transplants. During the study period our inpatient medication ordering system was largely non-computerized. Physicians wrote orders using pen and paper except for total parenteral nutrition and chemotherapy, which were ordered via computer systems with built in pediatric specific decision support for weight based or body surface area based dosing. Pharmacists manually entered physicians’ orders into a pharmacy computer system that had built in pediatric specific decision support for weight based or body surface area based dosing for approximately 300 of our most commonly prescribed medications, as well as decision support for cross-tabulating known patient allergies. Other medications required a manual check of dosing by the pharmacist. The pharmacy staffing included both point-of-care pharmacists working on the clinical units and centrally located pharmacists. Nursing staff and/or clerical associates manually transcribed orders onto a paper medication administration record (MAR) and manually recorded when medications were administered.

Identification of errors

At the Johns Hopkins Children’s Center a voluntary online medication error reporting system was in place from July 2001 to July 2004 as a quality and safety improvement tool. This system was developed internally and easily accessed via any public workstation computer on every clinical floor in the institution. Per policy, a medication error was defined broadly as “an act or omission (involving medications) with potential or actual negative consequences for a patient that, based on standard of care, is considered to be an incorrect course of action”. More specifically, via training to use the online medication error reporting system, the definition encompassed any error along the continuum of medication delivery, from prescribing and dispensing to recording on administration records and administration. Any provider (nurse, pharmacist, physician, therapist) was able to enter a report by accessing this website and completing a short form with predetermined error type choices in four categories (prescribing, dispensing, administering, MAR; fig 1). In addition, the error report form permitted free text description of the event via a “Comments” box and asked the reporter to rank the event on the following final outcome scale: 0 = event did not reach patient; 1 = event reached patient but no treatment or increased monitoring necessary; 2 = event reached patient and increased monitoring required; 3 = event reached patient and unplanned treatment or increase in hospital stay (probable or actual) required; and 4 = event reached patient and life threatening or serious morbidity or death occurred. Reporters were also asked to judge separately whether the event was a “near miss”, which was defined on the reporting form as “a potential or actual medication error that did not harm the patient (level 0, 1, or 2) but would likely cause significant harm if it occurs again”.
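The report form described above can be sketched as a simple record type. This is an illustrative reconstruction only: the class and field names (`ErrorReport`, `error_types`, `near_miss`) are hypothetical, not the actual system's schema; the four categories and the 0–4 outcome scale come from the text.

```python
from dataclasses import dataclass

# The four major error categories and the 0-4 final outcome scale,
# as described on the reporting form (names here are illustrative).
CATEGORIES = ("prescribing", "dispensing", "administering", "MAR")

OUTCOME_SCALE = {
    0: "event did not reach patient",
    1: "reached patient; no treatment or increased monitoring necessary",
    2: "reached patient; increased monitoring required",
    3: "reached patient; unplanned treatment or longer stay required",
    4: "reached patient; life threatening or serious morbidity or death",
}

@dataclass
class ErrorReport:
    error_types: dict    # category -> subtype chosen from the form's check boxes
    comments: str        # free text "Comments" box
    final_outcome: int   # 0-4 scale above, ranked by the reporter
    near_miss: bool      # reporter's separate "near miss" judgement

    def __post_init__(self):
        # Reject values outside the form's predetermined choices.
        if self.final_outcome not in OUTCOME_SCALE:
            raise ValueError("final outcome must be on the 0-4 scale")
        for category in self.error_types:
            if category not in CATEGORIES:
                raise ValueError(f"unknown error category: {category}")
```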

Figure 1

 Screenshot of online medication error reporting system showing major categories and subtypes of errors.

All medication error reports generated in the Children’s Center between 1 July 2001 and 31 January 2003 were evaluated. This was accomplished by extracting the error reports for children from the database supporting the online medication error reporting system and scrubbing all identifying information except for patient age and unit of admission at the time of the medication error event. Any one medication error report can contain several errors—for example, if a prescriber incorrectly orders the dose for a medication, the pharmacy dispenses the incorrect dose, and a nurse administers the incorrect dose, then three errors occurred within that one report. Given our interest in understanding the accuracy of these error reports as reflecting systems issues across the entire spectrum of medication delivery (prescribing, dispensing, administering, and documenting), our unit of analysis was the specific errors in each of these domains for each error report—that is, we evaluated each error report, separately asked whether any error had occurred in each of the four domains of the medication delivery system, and counted each domain as a separate error event.
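The unit-of-analysis rule above (each flagged domain within a report counts as a separate error event) can be sketched in a few lines. The function name and the set-of-domains encoding are illustrative assumptions, not the study's actual code:

```python
from collections import Counter

DOMAINS = ("prescribing", "dispensing", "administering", "MAR")

def count_domain_errors(reports):
    """Tally error events per domain.

    Each report is a set of domains in which an error occurred; each
    flagged domain counts as one error event, so a single report can
    contribute several events (the study's unit of analysis).
    """
    tally = Counter()
    for flagged_domains in reports:
        for domain in flagged_domains:
            if domain in DOMAINS:
                tally[domain] += 1
    return tally

# A wrong dose that slips through every check yields three events;
# a wrong-time administration yields one.
reports = [
    {"prescribing", "dispensing", "administering"},
    {"administering"},
]
tally = count_domain_errors(reports)
```

Under this scheme two reports yield four error events, mirroring how 581 reports in the study expanded to 1010 errors.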

Review process

To understand the accuracy of the error reports, three clinician experts in patient safety (two pediatricians and one pediatric pharmacist) independently reviewed all error reports and recorded any corrections to the error type classification based on information provided in the “Comments” box of the error report and knowledge of the medication delivery system within the institution. Any discrepancies between reviewers were settled by consensus review and discussion of each event by the clinician experts.

Summary data were generated to capture how often clinician expert review altered the error event type and the percentage agreement between the original incident reporter and the clinician experts. Summary data on these reconciled errors were created by cross tabulations by patient age, major and minor categories of types of errors, and reported outcomes to patient. In addition, events were summarized into types of medications (antibiotics, narcotics, etc) with cross tabulations on major and minor categories of types of errors and reported outcomes to patient.
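The cross tabulations described above amount to counting reconciled errors along two attributes at a time. A minimal sketch follows; the function name, dictionary keys, and sample records are hypothetical, chosen only to illustrate the analysis:

```python
from collections import Counter

def cross_tab(errors, row_key, col_key):
    """Count errors by a pair of attributes, e.g. medication class
    versus error category, as in the study's summary tables."""
    table = Counter()
    for err in errors:
        table[(err[row_key], err[col_key])] += 1
    return table

# Hypothetical reconciled error records (keys and values illustrative).
errors = [
    {"med_class": "anti-infective", "category": "dispensing", "outcome": 1},
    {"med_class": "anti-infective", "category": "prescribing", "outcome": 0},
    {"med_class": "pain/sedative", "category": "administering", "outcome": 2},
]
by_class_and_category = cross_tab(errors, "med_class", "category")
```

The same helper, called with different key pairs (`"category"` against `"outcome"`, `"med_class"` against `"outcome"`), reproduces each of the cross tabulations listed in the text.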


Epidemiology of errors

Over the study period 581 unique patient medication error events were reported via our online error reporting tool. These 581 reported events encompassed 1010 different medication errors since one event may have involved more than one error—for example, a wrongly prescribed medication that is dispensed and administered to a patient would involve at least three different types of errors at each level of checking, one each at the prescribing, dispensing, and administration levels. Based on average performance, our inpatient pediatric pharmacy would have filled approximately 2.2 million individual medications during the 19 month period of analysis. During this period there were over 11 000 admissions to the Children’s Center accounting for over 75 000 patient-days. Since this count of medication errors results from voluntary reporting, creating a rate from these data is inappropriate since undoubtedly this collection does not represent the full numerator of events that happened during this time period.

Figure 2 shows the frequency of each of the four major categories of medication errors by patient age. Approximately 50% of the 1010 reported errors occurred in children aged ⩽6 years; 298 (30%) were prescribing errors, 245 (24%) were dispensing errors, 410 (41%) were administration errors, and 57 (6%) were MAR errors. The most common reported errors in each of these categories were:

Figure 2

 Distribution of reported types of medication errors by patient age.

  • Prescribing errors: incomplete orders (4%), potential overdoses (7%), potential underdoses (4%).

  • Dispensing errors: wrong dose dispensed (6%), wrong drug dispensed (4%), missing dose (4%).

  • Administration errors: nurse missed order (5%), wrong dose/IV rate given (6%), wrong time (5%) (Note: According to hospital policy, “wrong time” means more than 1 hour from when the dose was supposed to be administered).

  • MAR errors: transcription discrepancy (4%).

The most common medication categories involved in errors were anti-infective agents (17%); cardiovascular agents such as antihypertensives, vasodilators, adrenergic agents, vasoconstrictors (7%); gastrointestinal agents such as laxatives, 5HT3 receptor antagonists, enzymes, anti-flatulents (8%); hormonal agents such as anti-diabetic agents, thyroid agents, adrenal agents, pituitary agents (6%); nutritional agents such as supplements and vitamins (11%); and pain/sedative agents such as opiate agonists, anxiolytics, barbiturates, antihistamines (15%).

In terms of the final outcome to the patient, 379 errors (38%) did not reach the patient (final outcome = 0), 511 (51%) reached the patient but no treatment or increased monitoring was necessary (final outcome = 1), 103 (10%) reached the patient and increased monitoring was required (final outcome = 2), and 17 (2%) reached the patient and unplanned treatment or increase in hospital stay (probable or actual) was required (final outcome = 3). There were no events that reached the patient and resulted in life threatening events, serious morbidity, or death. The majority of errors reported under final outcome = 3 involved dispensing of either the wrong dose or the wrong drug or involved administration of the wrong drug or the wrong dose. Of these 1010 reported medication errors, 173 (17%) were felt to be “near misses”. The error categories involved in these near misses predominantly concerned wrong doses in the prescribing, dispensing, or administration domains.

Accuracy of error reports

Reconciliation of these 1010 medication errors by the three clinician experts with determination of consensus resulted in a reduction in the number of medication errors to 899. Table 1 shows the counts and percentage distribution of each medication error domain (prescribing, dispensing, administering, and documenting) both before and after expert review and reconciliation. There was no substantial change in the volume of events or percentage of overall events within any one domain after expert review. The greatest value in reviewing these reports came from the free text “Comments” box. All of our individual error reports contained some form of a comment. Although difficult to quantify, a cursory review of these comments found that only about 5% were not helpful in clarifying the event—for example, the comment “Zinkham team notified” was not helpful in understanding the event.

Table 1

 Counts and percentage of errors in each medication error domain before and after expert review and reconciliation

In total, 208 (21%) of the initial medication errors were removed because they were inappropriately coded as errors and 97 (10%) errors that were not coded were added. As an example of errors that were inappropriately coded, an error report stated that an event involved “Wrong dosage form given” (an administration error) and “Wrong dose dispensed” (a dispensing error). Our review of the free text field of this report found the following: “Pharmacy tubed 0.4 mg of Zantac instead of 10 mg ordered. RN administered only 0.4 mg of the 10 mg ordered. MD and pharmacy aware, pharmacy will tube up right amount.” While it is clear that a wrong dose was dispensed, our reviewers agreed that the “Wrong dosage form given” event did not occur. Dosage form relates to issues such as tablets versus liquid. A further example in this category of errors that were inappropriately coded involves an event listing “Dispense other” (a dispensing error) and “Administration other” (an administration error). A review of the free text found the comment: “Dr X notified of error, order written to okay dextrose in PAS”. Based on our knowledge of our medication system, this event involved a case where the final dextrose concentration in a parenteral nutrition bag deviated by more than 10% from the order. According to hospital policy, administration of this solution can then only proceed if the prescriber is aware of this difference and agrees to it. The text comment here shows this, and once the prescriber had written the order stating that the solution with its dextrose concentration was okay to administer, no administration error could have occurred. We therefore removed the “Administration other” error from this report. As an example of errors that were not coded despite having occurred, an error report stated that an event involved “Wrong time” (an administering error). 
Our review of the free text field of this report found the following: “Ampicillin written as Q6 (but illegible and appeared as Q8) so RN transcribed and administered med as a Q 8 hour med. RN notified of error, dosage frequency clarified, and will be administered as Q6”. In our review we added the error of “Transcription discrepancy” (an MAR documenting error) since the illegible handwriting clearly contributed to this event.

In addition, 352 reported medication errors were modified during expert review in terms of the subtype of medication error that occurred. The majority of these (n = 207) involved the error reporter choosing the non-descript “Other” category on the reporting tool (Prescribing Other, Dispensing Other, Drug Administration Other, MAR Other). Based on expert review, knowledge of the event from the “Comments” box on the error reporting form, and our systems knowledge of our medication processes, the reviewers were able to reclassify all of these errors appropriately—for example, “Prescribe other” changed to “Prescribe wrong route ordered”. Of the 97 errors that were missed in the initial reporting, examples involved cases where either the order was incorrect or the dispensed dose was incorrect. If the administration level checks also failed and the patient ended up receiving a wrong dose, our systems based analysis approach required that an administration error of “Wrong dose/IV rate given” be added to the error report. Not uncommonly, the reporter did not work through the entire medication system in determining the levels at which errors occurred. Overall, 21% of the reconciled events involved more than one individual error type occurring—for example, wrong dose dispensed and wrong dose given.


Our analysis of inpatient pediatric medication errors reported via a voluntary online error reporting tool shows that most of these reports are accurate and reflect true errors based on expert clinician review. Most of the errors fell fairly equally into the categories of prescribing, dispensing, or administration errors. Only a minority of the errors (12%) led to additional monitoring of the patient, and only 2% resulted in unplanned treatment or increased length of hospital stay. None of the reported medication errors in our study resulted in life threatening or serious morbidity or death for the patient.

Since the release of the “To Err Is Human” report in 1999, the development and use of computerized error reporting systems has been deemed one of the five major areas of patient safety efforts nationally. Wachter’s recent review9 bemoaned the fact that, despite all the interest and dissemination of error reporting, there has been little discussion of what is being done with all the submitted reports. He commented: “This is the Achilles heel of error-reporting systems: the flawed notion that reporting has any intrinsic value in and of itself.” Wachter goes on to state that “in healthcare, errors are so frequent, the number of man-machine interfaces are so voluminous, and we have so much catching up to do that the average patient safety officer would have a full plate for the next five years without a single new report.”9 While we agree with this description, we attempted to answer a logical first question—namely, before one starts acting on these error reports, how likely are they to accurately reflect the errors at hand? Although there is a growing literature on the use of computerized error reporting tools, even in children, there is also a debate on completeness of reporting.2 A recent series of articles from the University of Iowa discusses the reasons why errors may not be reported, why error reports may be inaccurate, and estimates the rate of medication errors actually being reported.10–12 They conclude that, for accurate incident reporting, the practitioner must be able to (1) recognize that an error has actually occurred, (2) believe that the error is significant enough to warrant reporting, and (3) overcome any embarrassment of having committed an error and the fear of punishment.12 Our analysis of pediatric data predominantly addresses the first of these needs—namely, the ability of the practitioner to recognize that an error has occurred. 
Our reporting tool required that the reporter should be able to think at a systems level covering the complete medication system from ordering and dispensing to administration and documentation. For example, if the nurse administered a wrongly dispensed dose, the nurse committed an error as well as the dispenser. While we undoubtedly found that error reporters did not perform perfectly in this respect, particularly around recognizing MAR transcription errors, overall this imperfection had little impact on global trend analyses of the error reports. The bulk of reclassification of errors involved clearer delineation of the error from the reported “Other” categorization. A recent study on the impact of reporting data found that error event classification can “enhance or impede organizational routines for data analysis and learning.”13 Our analysis of data at a large academic children’s center found that, even with reporter classification error, global error trends which would guide organizational focus and learning were not substantially influenced.

Our data generally support the few published estimates of the scope of medication errors in hospitalized children. Selbst and colleagues14 reported that nurses and physicians in pediatric emergency departments were equally involved in medication errors, and in only 12% of these medication errors was additional monitoring of the patient needed. Our finding of nearly equal distribution of errors among prescribing, dispensing, and administering functions is in agreement with the idea that all disciplines involved in taking care of children are prone to error. Marino and colleagues15 studied medication errors in a pediatric teaching hospital and found that more errors occurred in supporting activities (such as transferring the order to another record) than in primary activities (prescribing or preparing the medication for administration). While it is difficult to compare this study directly with ours, our finding of a significant percentage of MAR transcription errors supports the idea that these “supporting activities” are indeed highly error prone. Several articles studying hospitalized children have reported that anti-infective agents are the most frequent drug involved in errors.16,17 Furthermore, Ross and colleagues16 reported that only 9.2% of pediatric medication errors required some active patient intervention and only 4% were classified as major events. This is similar to our findings of 12% and 2%, respectively.

Our data, however, do not agree with other published studies in some respects. Kaushal and colleagues17 reported that 74% of medication errors and 79% of potential adverse drug events occurred at the ordering stage, and from their data they extrapolated that 93% of the potential adverse drug events in their study could have been prevented with computerized physician order entry (CPOE). In our study, roughly 70% of the reported medication errors occurred at the dispensing or administering level, neither of which would necessarily be easily alleviated with CPOE. It is likely that differences in error data sources contribute to these two different perspectives. Our study relied solely on frontline caregiver error reports and may thereby more readily identify those errors that never end up in the medical chart. For example, if a medication is dispensed incorrectly, our reports would identify it, but it is unlikely that a patient’s medical chart will note this error, especially if the error was identified and corrected before administration to the patient. Earlier work by Wilson and colleagues18 in the UK reported that 72% of errors at a pediatric institution were due to doctors and 68% of errors were detected before drug administration. Our data showed that only 30% of the reported errors were attributable to physicians and 60% of medication errors reached the patient. The most likely explanation for these differences is, again, the difference in source data—we used error reports from an easy to use computerized error reporting tool—together with harder to define differences in safety cultures. The study by Wilson et al was conducted before the recent 5 year focus on errors following the IOM report and before the recent definition, evolution, and ability to measure and promote safety cultures. Manual paper incident reports were used, which probably introduced biases based on reporting burden, and it is unknown whether the safety culture in the period before the IOM report promoted error reporting in a non-punitive fashion.

Our study has some limitations. Firstly, it relied on voluntary error reporting, the use of which is directly related to the safety culture of the institution. Our institution has made patient safety the top priority in many open, public, and non-punitive ways.19–21 Given this culture and given the observation that it is not uncommon in our error reports to find staff self-reporting, it would be difficult to think that our error data are skewed by fear of reprisal. This is not to say that our error reports encompass the entire universe of errors that occurred during this time period at our institution. This is undoubtedly not the case but, based on our open discussions of errors, it is not thought that this sample of errors as detected from the error reporting system would be systematically different from all the errors at our institution. However, the error reporting system is used with different frequency by different provider groups. Most reports are entered by nursing and pharmacy staff, who are perfectly positioned to detect errors at all steps in the medication delivery process.

Secondly, one can question the generalizability of our data from a single institution with a “home grown” error reporting system. As a large academic children’s center that was not overwhelmingly computerized in terms of the medication system at the time of the study, our experiences and types of errors should be comparable to those in many other children’s and adult institutions, most of which would have similar medication systems. The content of our “home grown” error reporting system is representative of many other “home grown” and proprietary error reporting systems entailing a mix of check boxes to help categorize errors and free text fields. We have no reason to believe that the content of our error reports would be systematically skewed compared with those of other institutions.

This examination of computerized medication error reporting for children highlights that such tools are effective as a means of collecting reliable information on medication errors rapidly and in real time. Based on this experience with our “home grown” reporting tool, our institution implemented a computerized error reporting system in July 2004 that encompasses all types of errors, not just medication errors. Our study shows that errors in children are numerous and that the majority do not have significant consequences for the patient. Importantly, our data show that all providers involved in care are error prone in relatively equal proportions. With the predominant push in patient safety to implement CPOE, our findings suggest that a substantial percentage of pediatric medication errors may not be alleviated by CPOE focused solely on the prescribers. Our data support the idea that more robust computerized systems that also include dispensing and administration processes may be needed so that the correct doses are dispensed and administered and medication administration times are accurately adhered to. Further research is needed, not only in the area of CPOE for children, but also into ways to make the dispensing and administering of medications safer.



  • Funding: none.

  • Competing interests: none.

  • This research was approved by Johns Hopkins Medicine Institutional Review Board, application number 03-10-10-03e.

    The authors of this manuscript had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
