Adverse events—“instances which indicate or may indicate that a patient has received poor quality care”1—are used widely in healthcare quality measurement and improvement activities. Many commonly employed quality improvement mechanisms, such as incident reporting, occurrence screening, significant event auditing, processes for dealing with complaints, and (in the UK) the national confidential enquiries into various areas of clinical care are essentially focused on such adverse events. Even traditional medical quality improvement mechanisms such as mortality and morbidity conferences or death and complications meetings are predicated on the idea that by identifying and examining adverse events, we can learn lessons and change practice in ways that will make such events less likely in future and hence improve the quality of health care.
The principle that studying adverse events can produce information which leads to quality improvements is far from new and has been much used outside of health care.2, 3 It has an intuitive power—after all, we all learn much as individuals from our own mistakes, and it seems reasonable to hypothesise that organisations can also learn a great deal from their errors. However, it is easy to overlook the complexities of measurement involved in defining, classifying, identifying, describing, and analysing such adverse events.4 Like any other measurement tools, those used with adverse events need to be tested to ensure that they work. This article presents an analysis of the issues involved in defining adverse events, the sources of data which can be used to identify such events, and the validity and reliability of measures of quality based on adverse events in health care.
The idea that it would be useful or important to study the incidence, circumstances, or causes of adverse events in health care arises from various different but related schools of thought. For example, researchers concerned with the level and impact of iatrogenic disease,5, 6 those interested in measuring and improving quality,7, 8 others investigating medical malpractice, negligence, and litigation,9, 10 and some interested in the human and organisational psychology involved11 have all developed approaches to studying adverse events in health care. Researchers have examined different aspects of the epidemiology of adverse events—their consequences for patients, the costs for healthcare organisations, the perceptions of clinicians and others involved in these events, the causes and factors which contribute to their occurrence, their preventability, their use in performance measurement, and so on. Some common themes can be identified. Researchers concur that adverse events are important and worthy of study and investigation because they are quite prevalent (and more prevalent than might be expected), have important impacts on healthcare organisations and patients, and are often apparently preventable. Researchers also seem to agree that the study of adverse events should look beyond the performance of the individual clinician, and recognise the importance of the wider process of care and the organisational context in which it takes place.
Defining adverse events in health care
The starting point for any measure based on adverse events must be a definition of what constitutes such an event. Several different researchers have developed definitions for the term adverse event, and table 1 lists some of the principal ones.
Reviewing these definitions, it is clear that they largely agree that an adverse event is a happening, incident, or set of circumstances which exhibits three key characteristics to some degree:
Negativity: it must be an event which is, by its very nature, undesirable, untoward, or detrimental to the healthcare process or to the patient. This is a theme common to all definitions.
Patient involvement/impact: it must in some way involve, or have some negative impact or potential impact on, a patient or patients. The wider definitions of adverse events include occurrences in which there is no actual effect on any patient, though there is the potential for harm. More restrictive definitions often include only events where the patient has suffered some definable and identifiable ill effect.
Causation: there must be some indication that the event is a result of some part of the healthcare process (either through commission or omission), rather than a result of events outside the healthcare process, such as the patient's own actions or the natural progression of the disease. Again, definitions vary, with some accepting events as adverse events with little or no evidence of causation, while others insist on strong and direct evidence of causation.
The definitions listed in table 1, however, are of limited value in actual measurement because they do not define the circumstances or events which constitute an adverse event in sufficiently concrete terms to allow such events to be identified reliably. To this end, most measures of adverse events make use of a series of statements or criteria which operationalise the definition by describing a series of circumstances or instances which are seen as adverse events. Table 2 shows an example of such a list. Behind each example listed in the table might lie, in turn, a further description or definition setting out the details of what constitutes such an adverse event. Some use such a list of further definitions largely as a prompt list, and rely on the professional skills and experience of the person using the measure to decide whether a particular instance is or is not an adverse event, while others use a second stage of review by a senior clinician for this purpose. Some have endeavoured, however, to define the nature of each type of adverse event in specific terms to allow the measure to be used reliably by many different users without, necessarily, a high level of clinical expertise.
When adverse events are identified using a measure or tool such as that in table 2, they are then often analysed or categorised further in various ways. Their effect on the patient involved may be rated, in terms of their severity and temporal persistence, and their effect on the organisation and the costs of health care may also be considered. The cause of the event may be explored in an attempt to distinguish between those which arise from the healthcare system and those which may result from the underlying disease process or from other causes, and in order to attribute events to particular parts of the healthcare organisation. The avoidability of events or the acceptability of the standard of care provided may also be rated to make an explicit professional assessment of the quality of care. Sometimes, an assessment of the existence of negligence (which is a medicolegal rather than a clinical judgment) may be made.
The classification of adverse events in these ways is almost always done through some form of professional review. The rigour with which those reviews are undertaken varies—from those which are simply based on a single professional's personal and implicit assessment of the circumstances, to those which use multiple professional assessments, made with explicit criteria and definitions of the concepts involved. Investigations of the reliability and validity of this review process have indicated that consistent intra-rater and inter-rater reliability are elusive, and that achieving reliable judgments may demand more ratings per case than is practically feasible.16–18 Such professional reviews can also be biased by knowledge of case outcomes.19
Sources of data on adverse events
One approach to identifying adverse events in health care is to monitor or screen patients' clinical records either during or after the process of care. Information is abstracted from the clinical records by staff who use the records to decide whether or not adverse events have occurred and to document and classify those events. There are two important weaknesses in this process. Firstly, the clinical records may be deficient, and as a result adverse events might be missed. Indeed, because adverse events are harder to identify in deficient records, a paradoxical situation could arise in which good, comprehensive records produce a higher adverse event score (and so an indication of lower quality of care) than sketchy, incomplete ones. Secondly, the clinical records are always a summary of events in the patient's care and treatment rather than a record of every action and incident. Some adverse events might concern circumstances which are not routinely recorded in the clinical record, and so reliance on the clinical record as the sole source of information might produce a spuriously low indication of their incidence.
Another source of information on adverse events is the self reporting of incidents by clinical professionals. Indeed, most healthcare organisations have at least some reporting mechanisms for a range of adverse events such as medication errors and patient accidents.20, 21 If the reliability of the clinical records is a concern, however, the reliability of reporting mechanisms which rely on many different professionals to report adverse events, all of whom may have different personal definitions of what constitutes an adverse event and different degrees of commitment to the self reporting mechanism, must be even more in doubt. Some researchers have reported that adverse event reporting misses many adverse events which records screening would identify,13, 22 though others have found the two methods to be equally productive.23
With the increasing availability of information technology in hospitals, some researchers have used available data held within computer systems to identify fairly limited groups of adverse events,24, 25 but have found that unless wholly computerised clinical records are available, most adverse events cannot be identified in this way.
When adverse events are used in quality measurement, some applications simply identify individual events or case series of similar or related events which are explored using qualitative methods,8 often based on the ideas of the critical incident technique.26 Many make use of some kind of quantitative analysis, however, in which adverse events are counted, aggregated, converted into rates, standardised for various factors, etc. In both cases, it is important to consider the validity and reliability of the definitions and measures being used.
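The mechanics of such a quantitative analysis can be sketched briefly. All figures, specialties, and the reference case mix below are invented for illustration; the point is how event counts are converted into crude and case-mix-standardised rates per 1000 admissions.

```python
# Hypothetical illustration (all figures invented): converting adverse event
# counts into crude and specialty-standardised rates per 1000 admissions.

# events and admissions observed in one hospital, by specialty
observed = {
    "general surgery": {"events": 42, "admissions": 1200},
    "general medicine": {"events": 30, "admissions": 1500},
    "obstetrics": {"events": 8, "admissions": 800},
}

# reference case mix used for direct standardisation (share of admissions)
reference_mix = {"general surgery": 0.30, "general medicine": 0.45, "obstetrics": 0.25}

def crude_rate(data):
    """Adverse events per 1000 admissions, all specialties pooled."""
    events = sum(d["events"] for d in data.values())
    admissions = sum(d["admissions"] for d in data.values())
    return 1000 * events / admissions

def standardised_rate(data, mix):
    """Directly standardised rate: specialty rates weighted by a reference case mix."""
    return sum(mix[s] * 1000 * d["events"] / d["admissions"] for s, d in data.items())

print(f"crude rate: {crude_rate(observed):.1f} per 1000 admissions")
print(f"standardised rate: {standardised_rate(observed, reference_mix):.1f} per 1000 admissions")
```

Standardisation of this kind matters because, as noted above, adverse event rates vary between specialties: two hospitals with identical specialty-specific rates but different case mixes will show different crude rates.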
Validity of adverse event measures of the quality of health care
FACE AND CONTENT VALIDITY
Face validity is a measure of whether an instrument seems reasonable, and produces reasonable data, from the viewpoint of its users. Content validity is a measure of whether the items within an instrument adequately reflect the conceptual definition of its scope.27 Few studies of the face and content validity of adverse event measures exist, but those that do (table 3) suggest these measures are valid. One survey of 150 doctors in public health and clinical medicine in the UK found broad support for the validity of a generic adverse event measure, although participants suggested many improvements to the detail of the measure's definition.15 A parallel interview study found that although the principle of using adverse events in measuring quality was supported, clinicians had some concerns about the practice leading to an undue focus on such events unless other dimensions of quality were also measured. A small US study used a panel of three doctors to rate the “adversity” of each element of the adverse patient occurrences inventory,28 and reported that “all of the weights obtained were negative, and the physicians generally agreed with one another in their evaluations”.
CRITERION RELATED VALIDITY
Criterion related validity is a measure of the relation between measurements made using an instrument and an external variable (the criterion, sometimes called the gold standard) with which it is expected to correlate.27 In assessing the criterion related validity of an adverse event measure of healthcare quality, the most important and difficult issue is the selection of an appropriate criterion. The few researchers who have studied the criterion related validity of adverse event measures have mostly used some form of implicit professional assessment of the quality of care as their criterion. Although this is obviously simpler to do than identifying a separate explicit measure of the quality of care as the criterion, the acknowledged low validity and reliability of such implicit professional judgments16, 17, 29 present some difficulties.
Studies of the criterion related validity of adverse event measures generally support their validity, with some provisos (table 3). They suggest that adverse event measures may suffer from high false positive rates—identifying many cases as having adverse events when in fact they contain no real quality problem, and that different individual adverse event definitions may have quite different validity characteristics. This implies that the validity of adverse event measures may be crucially dependent on the mixture of adverse events they contain, and that attempts to develop or test adverse event measures should pay attention to the individual items included within the instrument. They also suggest that adverse event measures may provide a valid measure of quality only for those patients whose illness is relatively severe.
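The false positive problem is essentially one of predictive value under low prevalence, and a little invented arithmetic makes the point: even a screen with respectable sensitivity and specificity will mostly raise false alarms when genuine quality problems are uncommon. The figures below are illustrative assumptions, not drawn from any study cited here.

```python
# Illustrative arithmetic (figures invented): why an adverse event screen can
# flag many records that contain no real quality problem.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(true quality problem | screen flags the record), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A screen that detects 80% of real problems and wrongly flags 10% of clean
# records still yields mostly false alarms when only 5% of records hold a problem.
ppv = positive_predictive_value(sensitivity=0.80, specificity=0.90, prevalence=0.05)
print(f"PPV = {ppv:.2f}")
```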
CONSTRUCT VALIDITY
Construct validity is a measure of how well an instrument supports or conforms with theories or constructs.27 Exploring the construct validity of adverse event measures of quality is difficult because there are few established theories and constructs about the distribution and effects of adverse events for researchers to test. However, as table 3 shows, researchers have shown that adverse event rates are associated with increases in length of stay and resource usage, that different sorts of adverse events are associated with different levels of increase in length of stay, that rates of adverse events vary between specialties, that patients who are more severely ill on admission, patients who are emergency admissions, and patients who die in hospital have more adverse events, and that rates of adverse events vary between hospitals and are higher in some types of hospitals than in others. Overall, there is reasonable evidence for the construct validity of adverse event measures of quality.
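One simple way of testing such a construct is to compare length of stay between admissions with and without an identified adverse event, using a permutation test to judge whether the observed difference could plausibly arise by chance. The data below are invented for the sketch.

```python
import random

# Sketch of testing one construct (data invented): do admissions with an
# adverse event have a longer length of stay than those without?

random.seed(0)

# length of stay in days for two hypothetical groups of admissions
los_with_event = [9, 12, 7, 15, 10, 11, 8, 14]
los_without = [5, 6, 4, 7, 5, 8, 6, 5, 7, 4]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(los_with_event) - mean(los_without)

# permutation test: repeatedly shuffle the group labels and count how often a
# difference at least as large as the observed one arises by chance
pooled = los_with_event + los_without
n_with = len(los_with_event)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n_with]) - mean(pooled[n_with:]) >= observed_diff:
        count += 1
p_value = count / trials

print(f"mean difference = {observed_diff:.1f} days, p ~ {p_value:.4f}")
```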
Reliability of adverse event measures of the quality of health care
Inter-rater reliability measures whether, when the same test is applied to the same respondent or subject by different raters, the same results are produced.27 Several researchers have examined the inter-rater reliability of adverse event measures, generally by arranging for multiple reviews of patients' case records by different screening staff and then comparing the results of screening. Their results (summarised in table 4) are mixed, but most studies indicate that the reliability of adverse event measures is at best moderate to good, and that the reliability of measurement may be highly dependent on the quality of rater training and ongoing monitoring as well as the construction of the measure.
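One common way of quantifying inter-rater agreement while correcting for chance is Cohen's kappa. The sketch below uses invented screening results for ten case records; it illustrates the statistic itself, not any particular study summarised in table 4.

```python
# Cohen's kappa for two raters screening the same case records
# (ratings invented for illustration).

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

# 1 = adverse event identified, 0 = none, for ten case records
a = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
b = [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```

Here the raters agree on 8 of 10 records, but because half that agreement would be expected by chance alone, kappa is only 0.60, a level conventionally read as moderate to good.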
Intra-rater reliability measures whether, when the same test is applied to the same respondent or subject by the same rater on two different occasions, the same results are produced. One study has examined intra-rater reliability and found it to be moderate.
Adverse events are clearly important to healthcare organisations, not only because of their impact on patients but also because they can provide an insight into the quality of health care and an opportunity for improvement. Adverse events can, as individual instances of care, provide an information-rich and compelling case for action and improvement, and in aggregate they can be used to identify and explore important variations in performance. Perhaps because of the direct connection between adverse events, patients' healthcare experience, and the process of care itself, it can be argued that it is easier to use such information to bring about changes in organisational or clinical practice than it is with other types of information about the quality of care. Clinicians readily recognise the importance of adverse events and see the opportunities for improvement that they present.
However, the negativity of adverse events which makes them a powerful tool in quality improvement also makes it important that quality measurement does not solely focus on such events. The risk exists that a rather biased view of quality, focused on outlier events, technical quality, and patient safety issues could result.38 For the clinicians involved, focusing on adverse events to the exclusion of other things could be dispiriting and demotivating.
Furthermore, there are some important practical problems involved in using adverse events in quality measurement, especially in any quantitative sense. Firstly, developing definitions of adverse events which are both meaningful and can be reliably applied in measurement is difficult. Some measures rely on (and give scope for) the professional judgment of the person applying the measure, but this probably compromises reliability and may make unjustified assumptions about the knowledge and skills of that person. Other measures use more specific and detailed definitions of each type of adverse event, but these may become rigid and maximise reliability at the expense of validity. Developing definitions of adverse events which are both valid and reliable when used in measurement has proved difficult.
Secondly, most approaches to detecting and using adverse events in healthcare quality improvement make extensive use of professional review—sometimes against explicit criteria but more often on an implicit basis—both to identify such events and to analyse causation and make assessments of impact and other issues. Such implicit professional reviews are often not reliable, however, and are easily biased by extraneous circumstances or information.
Thirdly, most adverse event measures are not well tested by their developers to assess and demonstrate their validity and reliability or other characteristics of their behaviour. Even where such evidence exists it appears that when measures are taken up and used by others they may not always achieve the levels of validity and reliability achieved during development, and that in particular some ongoing monitoring of reliability is needed to sustain performance.
In conclusion, although adverse events in health care provide important and useful insights into the healthcare process which can certainly be used to great effect in promoting quality and performance improvements, some caution should be exercised, especially when they are used in measurement, either quantitatively or qualitatively.