Background Medication errors (MEs) and adverse drug events (ADEs) are both common and under-reported in the intensive care setting. The definitions of these terms vary substantially in the literature. Many methods have been used to estimate their incidence.
Methods A systematic review was done to assess methods used for tracking unintended drug events in intensive care units (ICUs). Studies published up to 22 June 2007 were identified by searching eight online databases, including Medline. In total, 613 studies were evaluated for inclusion by two reviewers.
Results The authors selected 29 papers to analyse; all studies took place in an ICU, were reproducible and reported ICU-specific rates of events. Rates of MEs varied from 8.1 to 2344 per 1000 patient-days, and ADEs from 5.1 to 87.5 per 1000 patient-days. The definitions of ADE and ME in the studies varied widely.
Conclusions Much variation exists in reported rates and definitions of ADEs and MEs in ICUs. Some of this variation may be due to a lack of standard definitions for ADEs and MEs, and methods for detecting them. Further standardisation is needed before these methods can be used to evaluate process improvements.
- Medication errors
- medication errors/cl—classification
- medication errors/mt—methods
- intensive care units
- adverse event
- medication safety
The rate of medication errors (MEs) and adverse drug events (ADEs) for patients admitted to the intensive care unit (ICU) is greater than that for patients admitted to general medical wards1 for several reasons. First, ICU patients receive more medications than patients on other hospital wards.1,2 Second, most medications in the ICU are given intravenously, and calculation of infusion rates is often required; both of these characteristics may create more opportunities for error.2 Third, most patients in the ICU are sedated and are therefore unable to identify potential errors themselves.2 Fourth, patients in the ICU have little physiological reserve, potentially increasing the risk of harm from medication-related errors. It is thus important to have methods to accurately measure rates of MEs and ADEs in the ICU.
The Institute of Medicine (IOM) provides definitions for MEs and ADEs.3 An ME is any error occurring in the medication-use process, for example, prescribing or administering the wrong dose.3 An ADE is any injury due to a medication.3 Although ADEs are often caused by errors, the term does not necessarily imply that an error occurred; an example is an allergic reaction to a drug in a patient with no known allergies.3 A preventable ADE occurs when an ADE results from a preventable ME (ie, any error in the prescribing or transcribing of a medication order, or in the dispensing, administration or monitoring of a medication).4
Many different definitions and methods for tracking MEs or ADEs have been used in the ICU setting. The purpose of our study was to systematically review the published literature regarding MEs and ADEs that occur in the ICU, specifically highlighting the differences in event rates as a function of the terms used to define events and the techniques used for detection.
We searched for all relevant studies published before 22 June 2007, when the final search took place. Studies were identified using Medline (1950–present), Embase (1980–present), Biosis (1969–present), CINAHL (1982–present), IPA (1970–present), DARE (1996–present), CDSR (1995–present) and ACP Journal Club (1991–present) Databases.
For the Medline search, we used the following strategy: the MeSH headings ‘intensive care units’ and ‘intensive care’ and the multi-purpose (mp) terms ‘intensive care’ and ‘icu$’. To identify a wide range of methods for collecting MEs, we used the MeSH headings ‘medication errors’ and ‘adverse drug reaction reporting systems’, together with the multi-purpose terms ‘(medication or drug or prescri$) adj2 (error$ or mistake$)’, ‘(incident$ or voluntary$) adj2 report$’ and ‘adverse drug event$’. The final search strategy was the union of the ICU terms and the ME terms. The search was restricted to articles published in English. Comparable searches were run in the other databases.
To be included, studies had to take place in an ICU, have original data, describe a method of measuring MEs or ADEs, include a rate of ME or ADE occurrence, and have sufficiently detailed methods so that the study could be replicated. Articles were excluded if they did not provide numerical ME or ADE rates, or if the rates provided were pooled with wards other than the ICU. We also excluded abstracts from conferences, letters, comments, opinion pieces and editorials.
All abstracts were reviewed by two investigators (AW and KL). Full manuscripts of any potentially eligible studies were obtained. Any disagreements in each round were resolved by discussion between the two reviewers and evaluation by the senior authors (PD and NA) as necessary.
Study data were grouped by the methods used to detect MEs or ADEs. Voluntary reporting systems involved ICU staff using a paper or computer-based system to provide details of potential or actual unintended medication events. Prescription review involved a pharmacist reviewing medication orders and documenting the errors found. Observational techniques involved a trained observer following a prescription through some portion of the process, from when it was written to its administration, to examine the process for errors. Trigger tools involved reviewing patient records, by chart review or with computer programs, for indications of ADEs such as antidote use or electrolyte abnormalities that could be due to medications. Multifaceted methods combined several of the aforementioned techniques.
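As a rough illustration of the computerised trigger-tool approach described above, a screening program might flag records containing predefined signals for subsequent manual review. The record schema, trigger list and laboratory threshold below are hypothetical assumptions for illustration, not taken from any study in this review.

```python
# Hypothetical trigger tool: scan patient records for signals that may
# indicate an ADE (antidote administration, electrolyte abnormality).
# Field names, triggers and thresholds are illustrative assumptions.
ANTIDOTE_TRIGGERS = {"naloxone", "flumazenil"}  # antidotes suggesting a possible ADE
POTASSIUM_HIGH = 6.0  # mmol/l; example laboratory threshold

def flag_possible_ades(records):
    """Return ids of records whose medications or labs contain a trigger."""
    flagged = []
    for rec in records:
        meds = {m.lower() for m in rec.get("medications", [])}
        potassium = rec.get("labs", {}).get("potassium")
        if meds & ANTIDOTE_TRIGGERS or (potassium is not None and potassium > POTASSIUM_HIGH):
            flagged.append(rec["id"])
    return flagged  # flagged charts would then go to manual review

records = [
    {"id": 1, "medications": ["heparin"], "labs": {"potassium": 4.1}},
    {"id": 2, "medications": ["naloxone"], "labs": {}},
    {"id": 3, "medications": [], "labs": {"potassium": 6.4}},
]
print(flag_possible_ades(records))  # [2, 3]
```

A flagged record is only a candidate ADE; as in the studies reviewed, a clinician would still need to confirm that the signal reflects medication-related harm.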
Information was collected on the type of ICU studied, the country in which the study took place, the type of hospital (academic vs community), the number of centres involved, the definition of events measured, the methods used to detect events and the event rates detected.
In studies involving more than one ICU or where a pre–post design was used, weighted averages were calculated for each individual study when possible to provide a single rate of MEs or ADEs per 1000 patient-days. This was done by multiplying the rate for each separate group by the fraction of the total number of patient-days examined in that group. The rates were then added together to obtain an overall weighted average. Studies with the same units of measurement (eg, ADE/1000 patient-days) were grouped together for comparison.
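The weighted-average calculation described above can be sketched as follows; the group rates and patient-day counts are hypothetical numbers chosen only to illustrate the arithmetic.

```python
# Sketch of the weighted average described above: each group's rate
# (per 1000 patient-days) is weighted by that group's share of the
# total patient-days, then the weighted rates are summed.
def weighted_rate(groups):
    """groups: list of (rate_per_1000_patient_days, patient_days) tuples."""
    total_days = sum(days for _, days in groups)
    return sum(rate * days / total_days for rate, days in groups)

# Hypothetical pre-post design: 120 MEs/1000 patient-days over 500
# patient-days, then 80 MEs/1000 patient-days over 1500 patient-days.
overall = weighted_rate([(120.0, 500), (80.0, 1500)])
print(overall)  # 90.0
```

Because the second period contributes three-quarters of the patient-days, the overall rate of 90.0 sits closer to 80 than to 120, which is the intended effect of the weighting.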
The initial literature search yielded 613 abstracts. After review of these abstracts, 174 full articles were obtained. Forty-five articles were excluded because they did not have original data, 37 because they did not report relevant outcomes, 14 because the severity of the events was not reported, four because results combined ICU and non-ICU specific data, four because the methods did not provide enough information to replicate the study and 41 for two or more of the above reasons. In the end, 29 articles remained for analysis.1,5–32
All of the studies came from first-world countries, and most (24/29) took place in an academic tertiary care hospital (table 1). Specifics of the durations of studies, countries of origin, types of ICUs and methods of study are summarised in table 1.
The studies were grouped by the outcome reported (eg, ADEs, preventable ADEs) as shown in tables 2–5. The predominant methods of event detection in the studies were prescription review (n=7) and multifaceted techniques (n=7), followed by observational techniques (n=6), voluntary reporting (n=5), trigger tools (n=3) and comparison of two of the previously mentioned methods (n=1). The majority of these studies used different definitions of the events being measured. There was substantial variability in event rates regardless of the specific outcome; in general, rates differed by one to two orders of magnitude across studies even when the same type of event was reported. For instance, rates of ADEs per 1000 patient-days ranged from 2.4 to 87.5 (table 2).1,5–12 Not unexpectedly, MEs were more common than ADEs, and more events were identified when multifaceted methods of detection were used.
We found a wide variation in reported rates of medication-related events. We believe that much of this large variability was due to differences in: (1) definitions of the same type of event and (2) methods used to detect events.
MEs compared with ADEs
MEs include any error from prescribing through to administration and monitoring of a drug, and do not necessarily cause harm. Conversely, an ADE indicates that patient harm has occurred. Because most MEs do not result in harm, it is logical that MEs are more frequent than ADEs. To illustrate, Rothschild et al found a rate of 129.5 MEs/1000 patient-days but only 37.6 ADEs/1000 patient-days.12
Although MEs often do not lead to harm, they provide a unique opportunity to identify the need for system changes that could prevent harm to patients. Measuring ADE rates is also useful, since this identifies situations in which patients are actually harmed and supports changes towards safer policies.
Variability in definitions of events
Another reason for variability among the studies was the diversity of definitions used for the same type of event. For example, 14 studies included in this paper reported MEs per 1000 patient-days as an outcome (table 3).1,7,8,12,16–25 Two of these studies provided no definition for this term, while the 12 other studies each used a different definition. Some studies focused on only one aspect of the medication process (eg, prescribing or administration), while others covered all aspects. Nevertheless, even among the 10 studies that examined all aspects of the process, there was still substantial variation in the definitions of MEs (table 3).1,8,12,16–22 Some authors defined MEs much as we have (errors in drug prescribing, transcription, dispensing, administration and monitoring),8,12,21 while others provided vaguer definitions such as ‘potential or preventable ADEs’ or ‘all events where treatment or observation differed from a planned one.’1,18
ADE was used as an outcome in 11 studies (table 2).1,5–12,14,15 Although one study did not provide a definition for this term,14 nearly all the others shared a common theme: patient injury. Nine of these definitions described an ADE as injury or harm caused by a medication, while one gave the vague definition ‘medication-related adverse event.’6 Overall, the concept of an ADE was more consistent among studies than that of an ME. Nevertheless, it would still be useful to have a standard definition for an ADE, as this would likely reduce the substantial variability in rates among studies.
The reason for this diversity of definitions is likely related to the fact that no standard definition is accepted by all the major organisations concerned with medication safety. For example, the IOM definition of an ME differs from that of the National Coordinating Council for Medication Error Reporting and Prevention, while the Agency for Healthcare Research and Quality provides no definition for the term. Standardisation of definitions among these important groups would likely set a precedent for researchers in this area. The IOM definitions were used most commonly in the papers included in this study, perhaps suggesting they may be accepted more readily by the research community.
Variability in methods of detecting events
In general, multifaceted methods for measuring events were associated with higher rates of event detection. However, when studies using multifaceted methods were compared, substantial differences in ME rates were found, varying from 18.6 to 146.1 per 1000 patient-days.1,8,12,16 The studies that reported 18.6 and 22.1 MEs per 1000 patient-days did not include observation, whereas the studies that reported 129.5 and 146.1 MEs per 1000 patient-days did.1,8,12,16 The study by Rothschild et al reported 129.5 MEs/1000 patient-days but only 37.6 ADEs/1000 patient-days, suggesting that the addition of observation may increase the sensitivity of detecting errors but is not associated with increased detection of harm.12
For ADEs, rates in these studies ranged from 5.1 to 37.6 per 1000 patient-days.1,8–12 Three of these studies used similar methods, involving voluntary and solicited incident reporting and daily chart review on weekdays.1,9,11 Despite this commonality, the rates were 5.1, 14.4 and 33 ADEs per 1000 patient-days.1,9,11 The study that reported 5.1 ADEs/1000 patient-days included only preventable events, whereas the other two included all events that caused patient harm.1 The variation between the latter two studies may be partly because they were done in different types of ICUs.9,11 The fourth study, by Rothschild et al, which reported 37.6 ADEs/1000 patient-days, incorporated direct continuous observation in addition to voluntary and solicited incident reporting and daily chart review on weekdays.12 This difference in detection methods likely accounts for its higher detection rate.
Substantial variation in event rates was also seen with voluntary reporting methods. Reasons for this variation include the anonymity of reporting, the hospital safety culture, staff education on incident reporting and the presence of a non-punitive reporting policy. In six studies using voluntary reporting that measured MEs/1000 patient-days, all reported having non-punitive incident reporting systems, and four of the six described educational programmes for staff and strategies to encourage staff to report incidents.17–22 The error rates in these studies still ranged from 8.8 to 241 MEs/1000 patient-days.17–22 Paradoxically, the study that reported 241 MEs/1000 patient-days did not describe any intervention to encourage reporting, while the study that reported 8.8 MEs/1000 patient-days described a strategy to encourage staff to report even trivial incidents.17,22 This paradox was likely related to differences in the terms used to define events. In the study reporting 8.8 MEs/1000 patient-days, the definition of an ME was specific (ie, a dose of medication that deviates from the physician's orders and reaches the patient).17 Conversely, in the study reporting 241 MEs/1000 patient-days, an ME was defined much more broadly as a ‘mistake made at any stage of the provision of a pharmaceutical product to a patient.’22

Observation techniques involve study personnel watching nurses prepare and administer medications and recording any discrepancies from what is ordered in the patient's chart. The rates of observed errors in medication preparation and administration varied substantially (2.8%, 7%, 8.8% and 33.9% of the total number of nurses' activities) after wrong-time errors were excluded.25–28 Although the techniques described in these four studies were similar, the definitions of medication preparation and administration errors differed.
For instance, the study by Calabrese et al gave a vague definition (‘any preventable event that may cause or lead to inappropriate medication use or patient harm’),26 while the study by Tissot et al gave a specific definition including ‘wrong drug preparation, dose error, wrong administration technique and physicochemical compatibility error’ (table 4).25 This wide variety of definitions could account in part for the diversity of error rates.
Studies that used prescription review found that 2.2–11.2% of orders were associated with an ME (table 5).24,29–31 All involved pharmacists reviewing prescriptions and recording the errors identified. Studies that reported lower error rates (2.2%, 5.4% and 5.9%) provided specific definitions of MEs, whereas the study by Ridley et al, reporting 11.2%, used the vaguer definition of ‘prescriptions which did not follow standards given by the British National Formulary.’24,29–31 When ME rates were expressed per 1000 patient-days, they were 8.2, 497.5 and 2344 for the three prescription review studies that reported these data.7,23,24 The methods used in two of the three studies were similar.7,23 The third involved a pharmacist reviewing the medication administration record for the previous 24 h and comparing it with doctors' orders.24 This difference, as well as inconsistency in the definition of an ME (table 3), may have contributed to the observed variation in rates.
Recommendations for methods of tracking MEs and ADEs
The IOM currently recommends different means of monitoring ADEs or MEs depending on what the institution hopes to achieve from the measurements.3 The recommendations are not specific for ICUs. If the institution wishes to track errors resulting in ADEs, chart review, voluntary and prompted self-report systems, and computer-generated ADE tracking are key recommendations.3 However, if the institution wishes to detect as many errors as possible in order to identify system problems to be fixed, observation, in addition to chart review and voluntary and prompted self-report, is recommended.3 Although advantages and disadvantages of each method are discussed, no gold standard is presented as the best method for tracking MEs or ADEs.
Our study confirmed that observation methods are very sensitive for detecting MEs and would be useful for the reasons noted above. We found that multifaceted techniques seemed to provide the most consistent tracking of ADEs, perhaps due to their rigour. For institutions with the resources to implement this type of approach, it seems quite useful in identifying errors associated with patient harm.
Utility of this study
The variability in error rates observed in this review likely far outweighs the actual variation in MEs among ICUs. A recent review by Moyen et al confirms the frequency and severity of errors in the ICU, and the importance of identifying the system failures leading to MEs so that these systems can be redesigned to improve patient outcomes.33 For this to occur reliably, standard definitions of MEs and ADEs must be adopted, and the methods for measuring rates of errors and adverse events should be standardised. These changes are also important for benchmarking rates among different ICUs. There is currently a trend towards pay-for-performance healthcare; benchmarking may yield financial benefits for institutions with low event rates. All of these reasons support the urgency of developing standardised means of reporting MEs and ADEs.
There is wide variation in the definitions and rates of MEs and ADEs in ICUs, and in the methods used to detect them. Our review showed that ADE has a more reproducible definition than ME, as an ADE denotes patient harm, whereas the interpretation of an ME can vary widely. Further standardisation of outcome definitions and methods of detecting errors is needed before the best methods for tracking ICU MEs and ADEs can be established.
Funding This work was funded by the Investigative Teams Program of the Michael Smith Foundation for Health Research (MSFHR), and NA is supported by a Scholar Award from the MSFHR.
Competing interests None.
Provenance and peer review Not commissioned; externally peer reviewed.