
Health Policy

Volume 71, Issue 2, February 2005, Pages 223-232

Patient safety — how much is enough?

https://doi.org/10.1016/j.healthpol.2004.08.009

Abstract

Awareness of errors in health care has skyrocketed in recent years, and huge resources have been mobilised to measure and reduce the harm. This is a good thing, and long overdue. But current improvement recommendations have ignored the costs of prevention and have prioritized improvements by the rigour with which they have been studied. The current proliferation of safety goals and required or recommended safe practices threatens to overwhelm the capacity of hospitals to safely implement change, yet the cost-effectiveness of most proposed improvements remains unknown. Unless we collect information on cost-effectiveness, and use it to prioritize both improvement initiatives and new safety research, society will not gain the maximum return (in terms of safety) for whatever resources are put into error reduction. This would be a bad thing.

Hospitals are complex systems, largely dependent on human performance, so improving hospital safety is not simple. Every change must be implemented with an understanding of human factors engineering and safety science, and even good changes can create unexpected new hazards. Increased safety precautions reduce preventable adverse events but generally impose both direct costs (to implement the safety precautions) and hidden costs (in the form of delays, new errors, or lost opportunities elsewhere). Perfect safety is not always possible and near-perfect safety may impose unacceptably high costs. The goal of minimizing the total cost of both accidents and accident-prevention requires information on both costs and effects of specific safety improvements. Such information is also needed to prioritize suggested safety improvements, when all cannot be implemented immediately. This evidence can best be produced using the economic evaluation loop, an iterative process involving routine, periodic assessment of costs and effects, and targeted original research where initial estimates reveal uncertainty in key values.

Introduction

Since the Institute of Medicine reports [1], [2], we are all now very aware of the incidence and costs of preventable adverse events in health care. Hospitals have been inundated with recommendations and requirements from accreditation agencies, government departments, drug and equipment vendors, and improvement organizations. Yet little guidance is available on the relative priority of various changes, and the costs and effects of most improvements have not been carefully studied. If we are to gain the maximum reduction in harm for whatever resources hospitals can put into safety, we need to collect and use cost-effectiveness evidence both to prioritize proposed safety improvements and to target new research. This paper proposes a method to generate the necessary evidence.


The old paradigm: the person approach

Attempting to avoid errors has been part of health care since its beginnings. Medical and nursing students are taught “First, do no harm” [3], and trained to take personal responsibility for patient safety. The person approach [4] sees errors as a failure of duty by health care providers; in this “blame, shame, and name” model, providers who are caught making errors are assumed to be incompetent, to be careless, and to deserve punishment, often including banishment from providing health care in…

The new paradigm: the system approach

Modern health care is a complex system, and errors are normal in complex systems [9]. The vast literature about safety in aviation (another complex system) is beginning to be reported in the medical literature [10], [11]. However, health care remains years behind other systems in learning about error reduction [1]. During the 1990s there was an explosion in information about errors and error reduction in health care, including research from the fields of cognitive psychology and human factors…

Pursuing perfection?

“First, do no harm” drives health care providers to seek perfect, harm-free performance. Yet modern medicine, like modern life generally, cannot always be made perfectly safe. Holding perfection as an ideal is inspiring and perhaps necessary to discourage complacency [18]; but there can be great harm in trying to achieve it, because near-perfection often imposes near-infinite costs. The closer we get to perfection in any particular area, the more likely it becomes that we could have gotten a…
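The diminishing-returns argument above can be made concrete with a toy model (all numbers are hypothetical, not drawn from the paper): total cost is prevention spending plus the expected cost of the accidents that still occur, and each extra dollar of prevention averts less harm than the one before.

```python
# Toy model (all numbers hypothetical): total cost of safety is prevention
# spending plus the expected cost of the accidents that still occur.
# Expected accident cost halves with every `half_life` dollars of prevention,
# so marginal returns diminish and perfect safety is never reached.
def expected_accident_cost(spend, base_cost=1_000_000, half_life=200_000):
    """Expected accident cost remaining after `spend` dollars of prevention."""
    return base_cost * 0.5 ** (spend / half_life)

def total_cost(spend):
    return spend + expected_accident_cost(spend)

# Scan a grid of prevention budgets for the spending level that minimises
# total cost; spending beyond this point costs more than the harm it averts.
budgets = range(0, 2_000_001, 10_000)
best = min(budgets, key=total_cost)
print(f"optimal prevention spend: ${best:,}")
```

Under these assumed numbers the minimum lies well short of perfection: beyond the optimum, each further dollar of prevention buys less than a dollar's worth of averted harm.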

Costs and benefits — why economic evaluation?

If more safety means less of “something else”, it becomes essential to ask “which something else?” What must we give up, to make health care safer? There is no one answer to this question. For example, different safety improvements may:

  • cost very little, and create huge health care savings by avoiding patient injuries that can be corrected, but at great cost [22];

  • cost very little, and save life, but produce little in direct health care savings;

  • cost a great deal, and produce only small reductions…
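Profiles as different as these can only be compared on a common scale. A minimal sketch (all figures invented for illustration) ranks candidate improvements by net cost per QALY gained, a standard cost-effectiveness metric:

```python
# Hypothetical candidate improvements: (name, net cost in dollars after any
# savings from injuries avoided, QALYs gained). All figures are invented.
improvements = [
    ("cheap fix, large savings",    -50_000, 10.0),  # saves money outright
    ("cheap fix, saves lives",       20_000,  8.0),
    ("costly fix, small benefit", 2_000_000,  1.5),
]

def cost_per_qaly(item):
    _, net_cost, qalys = item
    return net_cost / qalys

# Rank from most to least cost-effective. Cost-saving ("dominant")
# improvements have negative ratios and sort first; they should simply
# be adopted, while the rest compete for the remaining budget.
ranked = sorted(improvements, key=cost_per_qaly)
for name, net_cost, qalys in ranked:
    print(f"{name}: ${net_cost / qalys:,.0f} per QALY")
```

The point of the sketch is only that the ordering is invisible until both costs and effects are measured: effectiveness evidence alone cannot distinguish the first profile from the third.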

Setting priorities

When costs are not considered, high costs can result. Most health regions can adopt only a fraction of recommended improvements due to constrained finances and limited staff time to safely implement changes. This creates a danger of adopting improvements not in order of their cost-effectiveness, and real examples exist: the universal precautions (recommended by the US Centers for Disease Control to prevent worksite transmission of HIV to health care workers) apparently cost from $100,000 to $…

Producing the evidence

Since even information has a cost, we need to obtain information in order of its cost-effectiveness. The economic evaluation loop (EEL) proposed here (Fig. 2) outlines a process that will ensure we direct our scarce research efforts to the areas where new information has greatest value.

In Step 1, an initial estimate is made of base-case (expected) costs and benefits (in QALYs) for all alternative strategies. This initial synthesis combines existing research, from whatever source, and may…

Value added? An example

Why do we need the economic evaluation loop (EEL)? Does not research automatically produce the needed knowledge? What is added by this new framework that could possibly be worth creating yet another acronym?

We need the EEL because critical information gaps are not obvious until economic appraisal has been attempted, and sensitivity analysis performed. If we ignore costs, and focus only on effectiveness, we can easily generate a long list of “good things to do” yet have no idea which things…
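A minimal illustration of how such gaps surface (all values hypothetical): one-way sensitivity analysis recomputes the cost-effectiveness result with each uncertain input pushed to its plausible low and high values; the input producing the widest swing is the highest-value target for new research.

```python
# One-way sensitivity analysis (all values hypothetical). For each uncertain
# parameter, recompute cost per QALY at its low and high plausible values;
# the widest swing flags where new research would be most valuable.
base = {"cost": 500_000, "events_prevented": 40, "qalys_per_event": 2.0}
ranges = {
    "cost": (300_000, 900_000),
    "events_prevented": (10, 60),
    "qalys_per_event": (1.0, 3.0),
}

def cost_per_qaly(p):
    return p["cost"] / (p["events_prevented"] * p["qalys_per_event"])

swings = {}
for name, (lo, hi) in ranges.items():
    low, high = (cost_per_qaly({**base, name: v}) for v in (lo, hi))
    swings[name] = abs(high - low)

# The parameter with the largest swing is where uncertainty matters most.
priority = max(swings, key=swings.get)
print(priority, {k: round(v) for k, v in swings.items()})
```

In this invented example the result is far more sensitive to the number of events actually prevented than to the implementation cost, so an effectiveness study, not a costing study, would be the better research investment.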

The way forward

What is the solution? Leape et al. [36] recommend that patient safety improvement should be based on “reasonable judgements based on the best available evidence”. They also note that “some effective practices are costly, and it would not be practical to implement all of them. Thus, it is necessary to have a method for prioritizing them whether or not they have been subjected to randomized trials. Methods for prioritizing safety practices should be a key area for future research.”

The EEL

A tale of two changes

Comparing two safety changes, one implemented several years ago and one announced recently, shows the need for good evidence on costs and effects to support rational policy.

Conclusion

Basing safety improvement recommendations only on existing research evidence will not promote cost-effective reductions in patient harm. Use of the economic evaluation loop to set both implementation and research priorities for patient safety is necessary and sufficient to allow us both to gain the maximum harm reduction for our safety improvement spending, and to recognize “enough safety” when we see it. Bring on the economists!

Acknowledgements

The author thanks Dr. David Feeny (University of Alberta); Dr. Peter Norton (University of Calgary); Dr. Malcolm Maclure (University of Victoria); Dr. Robert C. Lee (University of Calgary); and the journal's anonymous reviewer, for helpful comments on evolving drafts of the paper. The author remains solely responsible for content, errors, or omissions. The author gratefully acknowledges career and research funding support from the Michael Smith Foundation for Health Research, Vancouver, BC,…

References (46)

  • Institute of Medicine (IoM). To err is human: building a safer health system. Washington, DC: National Academy Press;...
  • Institute of Medicine (IoM). Crossing the quality chasm. Washington, DC: National Academy Press;...
  • Nightingale F. Notes on hospitals. London: Longman, Green, Longman, Roberts, and Green;...
  • Reason J. Human error: models and management. British Medical Journal (2000).
  • Leape LL. Error in medicine. Journal of the American Medical Association (1994).
  • Hilfiker D. Facing our mistakes. New England Journal of Medicine (1984).
  • Wu AW. Medical error: the second victim. British Medical Journal (2000).
  • Koop CE. An ounce of error prevention. The Washington Post; December 23, 1999. p....
  • Perrow C. Normal accidents: living with high-risk technologies. New York: Basic Books;...
  • Helmreich RL. On error management: lessons from aviation. British Medical Journal (2000).
  • Sexton JB, et al. Error, stress, and teamwork in medicine and aviation: cross sectional surveys. British Medical Journal (2000).
  • Haas D. In memory of Ben — a case study. Joint Commission Perspectives (Joint Commission on Accreditation of Healthcare...
  • Cooper J, et al. Preventable anesthesia mishaps: a study of human factors. Anesthesiology (1978).
  • Gaba D, et al. Anesthetic mishaps: breaking the chain of accident evolution. Anesthesiology (1987).
  • Gaba DM. Anaesthesiology as a model for patient safety in health care. British Medical Journal (2000).
  • VA Health Care Network (VA), Upstate New York. The Veterans Health Administration garners praise nationally. VA Network...
  • Leape LL. Foreword.
  • Berwick DM. Is it wise to promote “perfection” as a goal? Institute for Healthcare Improvement, Continuous Improvement,...
  • Warburton RN. Editorial: what do we gain from the sixth coronary heart disease drug? British Medical Journal (2003).
  • Sackett DL, et al. Evidence-based medicine: how to practice and teach EBM (1997).
  • Expert Group on Learning from Adverse Events in the NHS (EGLAE). An organization with a memory. London, UK: Stationery...
  • Berwick DM. As good as it should get: making health care better in the new millennium (1998).
  • Patterson ES, et al. Improving patient safety by identifying side effects from introducing bar coding in medication administration. Journal of the American Medical Informatics Association (2002).