
Mountains in the clouds: patient safety research
  1. David W Bates
  1. Professor David W Bates, Brigham and Women’s Hospital, Boston, Massachusetts, USA; dbates{at}partners.org


Do you see yonder cloud that’s almost in shape of a camel?

By th’ mass and ‘tis, like a camel indeed.

Methinks it is like a weasel.

It is backed like a weasel.

Or like a whale.

Very like a whale.

William Shakespeare, Hamlet, Prince of Denmark (Hamlet and Polonius, Act III, ii)1

It is becoming increasingly clear that patient safety is an important issue globally, and the amount of research on patient safety is skyrocketing.2 3 Despite this, it is not clear how big the problem of patient safety really is—different studies have produced different results. For example, estimates of deaths attributed to errors in the USA have been hugely variable and hotly debated.4 Yet whenever studies using multiple modalities to find injuries are done, each modality seems to turn up multiple injuries, with little overlap between those found by one modality and another, so that the bottom-line point estimate clearly depends on the techniques used,5 6 and the true estimate may be higher than many people have realised. Furthermore, with technical advances such as intravenous “smart pumps”, which have a black-box capability to track interventions, whole new sets of errors can be found.7 The net outcome is that sorting out how big an issue safety really is has been like looking at mountains in the clouds: it is hard to tell where one thing begins and another ends, and reasonable observers may disagree, as did Hamlet and Polonius.

Yet if we are ever to sort out how many patients are injured by the care we deliver, we clearly need tools that allow us to assess with confidence the size of the patient safety “mountains”, and doing this will require research with more rigorous methodology. Many of the differences in findings almost certainly relate to methodological issues. A key limitation in the evidence base is that almost all the epidemiological data about the incidence of harm come from the inpatient setting. Furthermore, we need to go beyond diagnosis of the safety problem and evaluate the potential solutions in various settings. Yet the entire discipline of patient safety research is a young one, and as a result the current series of papers8–11 represents an important milestone in this process.

The authors address a series of questions which come up again and again in the safety research community. For example, how does safety differ from quality? Many, but not all, have accepted the Institute of Medicine’s description—that safety is the first part of quality, and that the two overlap and also differ. What is really different about patient safety research? What techniques are needed? To what extent is it just a subset of health services research? Do standard epidemiological techniques apply? Is it acceptable to measure error rates, or should one just assess how often patients are harmed? What are the relative roles of qualitative and quantitative methods? How important is taking a human factors approach, and in particular what role should failure mode and effects analysis play? These latter issues in particular divide the patient safety research community, with many individuals working on either the quantitative side or the qualitative side, and sometimes throwing stones at those on the other side. Another divisive issue is the role of clinical trials in patient safety research. The strict evidence-based medicine contingent has argued that trials are needed for everything, and that because we have so few multicentre trials relating to safety, we do not really know anything about it. Others have contended that a number of practices—especially those that are common sense and address errors that could result in fatalities—are so obvious that we do not need trials.12 In addition, it is a frequent problem in safety research that outcomes are so rare that the standard trial approach is too costly to be practical.13

Although the series of papers in this issue of Quality and Safety in Health Care cannot definitively answer all the above questions, it does provide a set of recommendations that will be valuable to safety researchers. One of the papers deals with conceptualising and developing interventions.8 The authors make the key point that even though rare events such as wrong-site surgery get much of the media attention—and often institutional attention, especially after an accident—patients are much more likely to be harmed by mundane events such as hospital-acquired infections or adverse drug events. Another issue with event detection is that many safety events are never detected and are attributed to other causes. One example is intravenous medication overdoses: many of these may be attributed instead to decompensation related to the patient’s underlying conditions.7 The authors also discuss the causal chain through which an intervention may have impact, and suggest looking for outcomes at various points along the chain. They also make the point that preimplementation evaluation is important—this is probably especially true for patient safety interventions.

Another paper in the series addresses the types of study that can be used, and makes the point that one size does not fit all.11 In fact, interventional patient safety studies in particular will often benefit from combining qualitative and quantitative approaches, in part because this may increase the likelihood that a successful intervention will be developed and make it possible to interpret why it did or did not work.

The third paper discusses measurement of safety and quality.10 Safety measurement in particular is still difficult: measures that routinely identify a large proportion of safety issues are still not generally available, and additional work needs to be done to standardise measures for research. This paper discusses the conundrum of whether to count errors or adverse events, and the extent to which all errors are likely to be similar to those which actually cause injuries. It also discusses the thorny methodological issues relating to bias, implicit versus explicit review and assessment of preventability. The authors put forward the interesting contention that advantages and disadvantages are not properties of measures but the context in which they are used.

Another paper presents a synthesis of the above9; it goes through the use of mixed methods—which are especially useful in patient safety research—and discusses how information can be synthesised in a Bayesian statistical framework. Although most researchers will probably not use this technique, it is certainly conceptually attractive.

There are certainly many important areas related to patient safety research that do not get addressed here, as they were beyond the scope of this exercise. One is the issue of human subjects—for example, institutional review boards often have difficulty accepting that patients will be randomised to a “less safe” arm, even if it represents usual care. Another is how best to gather the data needed to prioritise among various safety practices—such data are badly needed, but this issue is mentioned rather than discussed in detail. However, overall, this series represents perhaps the best such set of articles to date, addressing some of the key controversies and methodological issues in patient safety research, in addition to offering thoughts about whether patient safety research is sufficiently distinctive to represent its own discipline. Further work on these issues is essential if we are to better understand patient safety and map its topography.

Acknowledgments

I thank A Wilcox for assistance with preparation of the manuscript.

REFERENCES

Footnotes

  • Competing interests: None declared.
