Adverse drug events: what's the truth?
B Dean, Director, Academic Pharmacy Unit, Hammersmith Hospitals NHS Trust and the School of Pharmacy, University of London, London W12 0HS, UK; bdean{at}hhnt.org


    Reasons for the wide range in reported adverse drug event rates include discrepancies in the definitions and data collection methods used. Great care must be taken when interpreting the results of studies of adverse drug events and other types of medical harm, and standardised methods and definitions are needed to compare adverse drug event rates.

    You don’t have to look very far to find that the number of patients being harmed by medication is perceived to be a problem. Nearly every medical, pharmaceutical, and nursing journal frequently publishes articles to this effect. Key documents on medical error—drawing particular attention to the harm caused by medication—have been produced by the US Institute of Medicine and by both the Department of Health and the Audit Commission in the UK. Add to this the widespread coverage at professional conferences and in the media, and it is clear that adverse drug events (ADEs) appear to represent an epidemic.

    What is less clear is how often ADEs actually occur. An enormous range of figures has been reported in the literature and is cited regularly, suggesting that ADEs occur in anything from 0.7% to 6.5% of hospital inpatients.1,2 In this issue of QSHC a further paper is published in which 720 ADEs were identified in 2837 inpatients (25%).3 So why this range of figures and, perhaps more pertinently, does this mean some institutions are safer than others?

    Before considering this question it is important to pause for a minute to think about what is being measured, as the definitions and terminology used in the area of iatrogenic harm are notoriously confusing. ADEs refer to instances where patients are unintentionally harmed as a result of drug use. This includes harm that occurs due to either an adverse drug reaction or a medication error.4 Medication errors are generally considered to be preventable whereas adverse drug reactions (or side effects, in common parlance) are less so. Medication errors may or may not result in ADEs, and a separate but overlapping body of literature examines these in more detail.

    Returning to our question of why such a range of ADE rates has been reported, there are three possible reasons. The first is that, within the general definition of an ADE given above, there is wide discrepancy in what is considered to constitute “harm”. For example, in the Harvard Medical Practice study,1 one of the most well known studies of iatrogenic injury, harm was defined as “measurable disability at discharge or increased length of stay due to the event”. This study therefore included only events that resulted in more serious levels of harm. The US based ADE Prevention Study Group did not define the level of harm they included, but suggest that “all” ADEs were studied; only 8% of the ADEs they identified met the definition used in the Harvard study.2 The paper by Rozich et al3 also suggests that any degree of harm was included.

    The second possible reason is that a wide range of data collection methods have been used. The Harvard study and similar Australian and UK studies5,6 were based on a retrospective review of medical notes. There are many reasons why ADEs may not be documented in the medical notes, and this method may therefore lead to underreporting. The ADE Prevention Study Group instead used targeted self-reporting and daily medical record review, an approach which is likely to identify more ADEs than a retrospective review of medical notes but may still miss those that are not recognised as such or otherwise neither reported nor documented. Another approach is to develop a computer based system to prospectively screen for ADEs based on “triggers”—that is, results of laboratory tests or orders for medication that may indicate that an ADE has occurred. The medical notes for those patients with positive triggers can then be examined in more detail. Using this method, Classen et al7 found an ADE in 1.7% of patients. The method described by Rozich et al3 in this issue of QSHC is based on this approach, but involves manually screening for triggers instead of requiring an ADE screening programme to be integrated with computerised prescribing and results reporting systems. These methods may be useful to find evidence of ADEs that are neither reported nor documented clearly in the medical notes, but any ADEs that do not result in a trigger will be missed.
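    The logic of trigger-based screening can be illustrated with a minimal sketch. The trigger rules below (an INR above 6, very low blood glucose, orders for reversal agents such as naloxone) are hypothetical examples of the kind of rule such a system might use, not the actual triggers of Classen et al or Rozich et al; a positive result flags the patient's notes for detailed manual review rather than confirming an ADE.

```python
# Minimal sketch of trigger-based ADE screening.
# All trigger names and thresholds are illustrative assumptions,
# not the rules used in the studies discussed above.

# Laboratory triggers: a result outside a threshold may indicate an ADE.
LAB_TRIGGERS = {
    "INR": lambda value: value > 6.0,             # possible over-anticoagulation
    "glucose_mmol_l": lambda value: value < 2.8,  # possible hypoglycaemia
}

# Medication triggers: orders for "antidote" drugs often signal an ADE.
MEDICATION_TRIGGERS = {"naloxone", "flumazenil"}

def screen_patient(labs, orders):
    """Return the triggers that fire for one patient's record.

    labs:   list of (test_name, value) tuples
    orders: list of drug names ordered for the patient
    """
    fired = []
    for test, value in labs:
        rule = LAB_TRIGGERS.get(test)
        if rule and rule(value):
            fired.append(f"lab:{test}={value}")
    for drug in orders:
        if drug.lower() in MEDICATION_TRIGGERS:
            fired.append(f"order:{drug}")
    return fired  # a non-empty list flags the notes for manual review

# Example: this (hypothetical) patient would be flagged for record review.
flags = screen_patient(
    labs=[("INR", 7.2), ("glucose_mmol_l", 4.5)],
    orders=["naloxone", "paracetamol"],
)
print(flags)
```

    The sketch also makes the method's blind spot concrete: an ADE that produces no abnormal laboratory result and no tell-tale medication order never enters the review pool, which is exactly the limitation noted above.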

    The third reason why there may be differences in reported ADE rates is that there may be differences in the underlying ADE rates in the different institutions. However, without a standardised method for identifying ADEs we do not know the extent to which this is the case. The data of Rozich et al suggest that the differences are not great, with a range of 2.47–4.81 ADEs per 1000 doses reported across the 86 hospitals studied (mean 2.68).
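    Note that a rate per 1000 doses uses a different denominator from the per-patient percentages quoted earlier, which is part of what makes cross-study comparison difficult. A small sketch of the normalisation, using made-up counts rather than data from any of the studies cited:

```python
# Normalising ADE counts to a rate per 1000 doses dispensed, so that
# institutions with different activity levels can be compared on one scale.
# The counts below are invented for illustration only.

def ade_rate_per_1000_doses(ade_count, doses_dispensed):
    """ADEs per 1000 doses = 1000 * (ADE count / doses dispensed)."""
    return 1000 * ade_count / doses_dispensed

# Two hypothetical hospitals of different sizes:
print(ade_rate_per_1000_doses(30, 12000))   # small hospital: 2.5 per 1000 doses
print(ade_rate_per_1000_doses(120, 25000))  # large hospital: 4.8 per 1000 doses
```

    The same per-dose rate can of course coexist with very different per-patient percentages, depending on how many doses each patient receives.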

    These issues clearly demonstrate two points: firstly, that great care needs to be taken when interpreting the results of studies of ADEs and other types of medical harm; and, secondly, that we desperately need standardised methods and definitions to compare ADE rates in different institutions and in the same institution following large scale changes designed to reduce them. As well as being practical for routine use, such a method would have to be tested in terms of its validity and reliability. The extent to which a method could be used in countries outside the one in which it was developed would also require careful consideration; prescribing practice, laboratory reference ranges, and drug names can differ immensely. These issues represent major challenges for those wanting to show a reduction in the number of patients being harmed by drug use.

