Analysis

Is health care getting safer?

BMJ 2008; 337 doi: https://doi.org/10.1136/bmj.a2426 (Published 13 November 2008) Cite this as: BMJ 2008;337:a2426
  1. Charles Vincent, professor of clinical safety research1,
  2. Paul Aylin, clinical reader2, assistant director3,
  3. Bryony Dean Franklin, director4, professor of medication safety5,
  4. Alison Holmes, director of infection prevention and control6,
  5. Sandra Iskander, manager1,
  6. Ann Jacklin, chief of service7, visiting professor5,
  7. Krishna Moorthy, clinical senior lecturer8, consultant in general surgery6
  1. Imperial Centre for Patient Safety and Service Quality, Department of Biosurgery and Technology, St Mary’s Hospital, London W2 1NY
  2. Division of Epidemiology and Public Health, Imperial College, London
  3. Dr Foster Unit, Imperial College
  4. Centre for Medication Safety and Service Quality, Imperial College Healthcare NHS Trust, London
  5. School of Pharmacy, University of London, London
  6. Imperial College Healthcare NHS Trust
  7. Pharmacy and Therapies, Imperial College Healthcare NHS Trust
  8. Clinical Safety Research Unit, Imperial College
  Correspondence to: C Vincent c.vincent@imperial.ac.uk
  • Accepted 31 October 2008

Despite numerous initiatives to improve patient safety, we have little idea whether they have worked. Charles Vincent and colleagues argue that we need to develop systematic measures

Patient safety has been high on the national and international agenda in health care for almost a decade. In the United Kingdom, reviews of case records have shown that over 10% of patients experience an adverse event while in hospital,1 2 a figure reflected in similar studies around the world.3 Considerable efforts have been made to improve safety, and it is natural to ask whether these efforts have been well directed. Are patients any safer? The answer to this simple question is curiously elusive. Although some aspects of safety are difficult to measure for technical reasons (defining preventability, for instance), the main problem is that measurement and evaluation have not been high on the agenda. We believe that the lack of reliable information on the safety and quality of care is hindering improvement in safety across the world.

The principal approach to patient safety in the UK, United States, and many other countries has been to establish local and national reporting systems; these systems invite voluntary reporting of unspecified safety incidents with the aim of learning lessons and feeding the findings back into the system. However, these reporting systems do not effectively detect adverse events. In the most recent comparison, reporting systems detected only about 6% of the adverse events found by systematic review of records.2 Reporting systems are a valuable component of a safety system, but they are essentially a means of warning and communication within an organisation and, if large scale, of detecting rare events not easily detectable by other means. They cannot, and never will, act as a measurement system for safety.

Here, we use the example of the UK National Health Service to determine whether it is possible to assess change in several core areas that reflect the safety of health care and, if so, what changes are apparent. We focus on measures of outcome, in the sense of definable events that happen to patients (infections, morbidity, mortality), and on key measures of process (such as drug errors). We have not considered concepts such as culture or resilience, which are held to reflect safety but are not proven indices of clinical process or outcome. Defining safety is itself a challenge, and we do not pretend that these indicators can provide more than a crude measure of overall levels of safety. The indicators we have chosen are, however, all important to patients.

In-hospital mortality

Hospital standardised mortality ratios show that in-hospital mortality has fallen significantly over the past 11 years.4 When measured against mortality in 2000-1, the ratio has fallen from 114 in 1996-7 to 82 in 2006-7. The ratio is adjusted for several factors including age, sex, diagnosis, whether the admission is planned or unplanned, socioeconomic deprivation, comorbidity, and season, so these changes are not simply due to different types of patients being admitted to hospital. Shorter admissions and changing discharge policies may have some bearing on the reduction in hospital mortality, as could more general trends in mortality (both in and out of hospital), and a general increase in longevity.5 The overall picture, although difficult to interpret, suggests that care is at least as safe and may be improving.
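
For readers unfamiliar with how such ratios are constructed, the hospital standardised mortality ratio is in essence an indirectly standardised ratio of observed to expected deaths. The notation below is ours and sketches only the general method of indirect standardisation, not the exact Dr Foster casemix model:

```latex
% Indirect standardisation (general method; our notation):
%   O   = observed deaths in the hospital and period of interest
%   n_i = admissions in casemix stratum i (age, sex, diagnosis, ...)
%   p_i = death rate in stratum i over the reference period (2000-1)
\[
  \mathrm{HSMR} = 100 \times \frac{O}{E},
  \qquad
  E = \sum_i n_i \, p_i
\]
```

On this scale 100 means exactly as many deaths as the casemix-adjusted expectation, so the reported fall from 114 to 82 corresponds to moving from 14% more deaths than expected to 18% fewer.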

Mortality after surgery

The Society of Cardiothoracic Surgeons has collected data for over 20 years. There is evidence of improved outcomes in cardiac surgery, with a reduction in mortality in the north of England from 2.4% in 1997-8 to 1.8% in 2004-5.6 The Scottish Audit of Surgical Mortality has separated unavoidable deaths from those in which adverse events contributed to the death, providing an indirect measure of safety. These data show a clear downward trend, suggesting that efforts to increase the involvement of consultants in decision making and to improve interaction between surgical, anaesthetic, and intensive care teams have borne fruit (fig 1).7 Similar trends have been observed in data from the National Confidential Enquiry into Patient Outcome and Death8; however, it is not clear whether this has translated into improved overall outcomes.

Fig 1 Percentage of deaths during surgery in which adverse events in management were identified as cause, 1994-2006.7

Safety indicators

The United States Agency for Healthcare Research and Quality has made important advances by adding safety indicators to its existing set of quality indicators,9 though there are few long term trend data as yet. These indicators have now been translated for use with English administrative data.10 Deaths in healthcare resource groups (groups of clinically similar treatments and diagnoses) expected to have a low mortality (<0.5%) seem to be decreasing significantly, in line with trends in national all cause mortality (fig 2). The incidence of foreign bodies being left during a procedure is also decreasing slightly, but this indicator remains suspect as retained sutures are sometimes wrongly coded.11 The remaining indicators are all increasing, suggesting that care may be getting less safe. However, at this stage of development the most likely explanation for the observed trends is improved coding; payment by results seems to have given a strong incentive to trusts to improve their coding.12 13

Fig 2 Changes in rates of nine Agency for Healthcare Research and Quality derived patient safety indicators. Hospital Episode Statistics 1996-7 to 2005-6, England
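
To make concrete what deriving such an indicator from administrative data involves, here is a minimal sketch that computes an event rate per 1,000 eligible discharges from coded episode records. The record layout, eligibility rule, and use of a single trigger code are illustrative assumptions, not the actual Hospital Episode Statistics schema or the AHRQ indicator specification.

```python
# Minimal sketch: computing a patient safety indicator rate from coded
# discharge records. The Episode fields and the single trigger code are
# illustrative assumptions, not the real HES schema or AHRQ definition.
from dataclasses import dataclass, field

@dataclass
class Episode:
    year: str                                        # financial year, e.g. "2005-6"
    procedure_codes: set = field(default_factory=set)
    diagnosis_codes: set = field(default_factory=set)

# Hypothetical trigger code for "foreign body left during procedure"
TRIGGER_CODE = "T81.5"

def indicator_rate_per_1000(episodes, year):
    """Trigger-coded events per 1,000 discharges with any procedure."""
    eligible = [e for e in episodes if e.year == year and e.procedure_codes]
    if not eligible:
        return 0.0
    events = sum(TRIGGER_CODE in e.diagnosis_codes for e in eligible)
    return 1000 * events / len(eligible)

episodes = [
    Episode("2005-6", {"H33.9"}, {"T81.5"}),
    Episode("2005-6", {"H33.9"}, set()),
]
print(indicator_rate_per_1000(episodes, "2005-6"))  # 500.0 per 1,000
```

Note that a year on year rise in such a rate can reflect either more events or more complete coding of the same events, which is why the upward trends in fig 2 are treated with caution above.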

Healthcare acquired infection

The recognition that reliable data and public confidence could never be assured by voluntary reporting systems has led to a steady increase in the number of healthcare acquired infections for which reporting is mandatory. Reporting of methicillin resistant Staphylococcus aureus (MRSA) bacteraemia has been mandatory since April 2001, and of Clostridium difficile since January 2004. Reporting is public and transparent, with monthly reports that must be signed off by the trust’s chief executive. This produces strong pressure both for accurate data and for actual reduction of infection. Hospital acquired infection has moved from being a side issue tackled by small, harassed infection control teams to a major organisational priority.14 15 16

Voluntary reports of both MRSA and C difficile rose steadily in the 1990s, in part because of improved detection, surveillance, and reporting (fig 3).17 The introduction of mandatory reporting and the accompanying infection control initiatives are now reducing MRSA infections nationally, particularly in the acute teaching trusts. The latest data from the Health Protection Agency suggest that rates of C difficile are now also falling, though the agency expresses some caution about whether this can be sustained in the longer term.18

Fig 3 Trend in methicillin resistant Staphylococcus aureus bacteraemia reports received by voluntary and mandatory surveillance schemes in England, 1990-2005.17

Drug errors and adverse events

Several UK studies have been published on the rate of drug error (tables 1 and 2). Rates of administration error are not decreasing, and may even be increasing; no trend is apparent for rates of prescribing error. In both cases, however, direct comparison is limited because the studies were conducted in different settings and used different methods.19 20 Adverse drug events have many causes, and it will never be possible to reduce them to zero. Nevertheless, many are undoubtedly preventable, and the overall level of adverse drug events would be an important indicator of the safety of any healthcare system. Serious harm arising from adverse drug reactions should be reported through the yellow card system, and harm arising from drug errors through the national reporting and learning system, but in both cases there is likely to be significant under-reporting. More comprehensive data can be obtained from reviews of medical notes,2 20 21 but studies at regular intervals would be needed. At the moment we have no idea of national rates or trends for adverse drug events.

Table 1 UK data on drug administration errors

Table 2 UK data on inpatient prescribing errors.19
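
For orientation, administration error rates in studies such as those in table 1 are conventionally expressed as errors per opportunities for error observed. A minimal sketch of the arithmetic, with invented figures:

```python
# Sketch of the conventional rate used in observational studies of drug
# administration: errors as a percentage of opportunities for error.
# The example figures are invented, purely to illustrate the arithmetic.

def administration_error_rate(errors, opportunities):
    """Percentage of observed administrations in which an error occurred."""
    if opportunities == 0:
        raise ValueError("no opportunities for error observed")
    return 100 * errors / opportunities

# e.g. 55 errors seen in 1,100 observed administrations -> 5.0%
print(administration_error_rate(55, 1_100))
```

Because the result depends on what counts as an opportunity and on how administrations are observed, rates from studies with different definitions cannot be compared directly, which is the limitation noted above.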

Effect of lack of measures

The data summarised above present a mixed picture. Although there are some difficulties of interpretation, there is reasonable evidence for a reduction in overall hospital mortality and in mortality after certain types of surgery. There is also good evidence for a fall in rates of MRSA, and possibly also of C difficile. Of the nine safety indicators, seven show an increase, which may reflect increasingly unsafe care or, more probably, better coding. For medication errors, adverse drug events, and indeed most other safety issues in the NHS, we simply have no idea of long term trends.

The lack of reliable data on safety, and indeed quality, over time hinders improvement efforts at every level of the NHS and makes it impossible to determine the effect of the numerous safety initiatives. At a national level, we have no way of assessing the results of national initiatives and campaigns that, remarkably, are launched with little thought of evaluation. The National Audit Office22 has pointed out that it cannot assess value for money for patient safety because, although it has financial information, there is no safety information to set it against. At an organisational level, boards that are accountable for safety and quality have no reliable way of monitoring the safety and quality of care in their organisations.

At the level of the clinical directorate and the clinical team, the problem is more acute still. The recent King’s Fund inquiry into the safety of maternity services23 pointed out that if clinical teams are to ensure or improve safety and quality they must have data on their performance and an opportunity to reflect on the trends and features of those data over time. League tables and national figures are less important than an assessment of how a team is doing compared with last month and last year. Some obstetric teams had assembled such data, but others were completely in the dark about their performance, making any systematic work on safety or quality almost impossible.

Finally, we need to consider why it is so hard to engage clinical staff in safety and quality initiatives. Clinical staff care very much about safety and quality; at an individual level it is at the heart of everything they do. However, this concern generally does not extend to efforts to improve the performance of the broader system. One reason is that staff do not appreciate the extent of the safety and quality problems found in major studies, or do not believe that those findings apply to their own departments.24 There is little hope of real engagement without systematic collection of local data that are relevant to clinical concerns and are widely disseminated and discussed throughout the clinical team.

What needs to be done?

The absence of solid measurement of safety, and indeed quality, is a worldwide problem. Measuring safety in health care is much more difficult than measuring safety in other domains, where mistakes and injuries are fewer, less varied, and can be more clearly defined. Outcomes remain the clearest reflection of harm and are of most concern to patients. However, when adverse outcomes are rare, we may also need to measure process data (such as use of prophylactic antibiotics), which are associated with better outcomes. Looking further ahead, we might wish to assess the levels of hazard, the ability of systems to recover when errors occur, and indices such as safety culture or staffing levels which might reflect overall safety of systems.

At the policy level we need a large shift of emphasis and resources, away from unsystematic voluntary reporting towards systematic measurement. We suggest that the National Patient Safety Agency, supported by the Healthcare Commission, should make a major effort to develop measures of safety and quality. This would require increased coordination of national audits, a process already begun with the Patient Safety Observatory, and exploration of new methods of assessing trends in adverse drug events and other safety critical issues. For instance, it would be relatively straightforward to carry out an annual review of case records to monitor the overall trends in adverse events and see whether any progress has been made; however, we now need to move towards much more specific indices of error and harm that can be monitored and targeted. For drug errors, a national screening programme is needed in which a suitable sample of patients is studied each year using identical methods and definitions. While top down targets may play a part in driving quality, the most urgent need is to collect a broad but manageable spectrum of indicators that are genuinely useful to the clinical teams that monitor quality and safety day to day.

Existing information systems could record safety indices with only minor adjustment. For instance, small changes in coding would allow clinicians to routinely identify the proportion of patient admissions due to adverse drug events, something which is currently done only for research purposes. This would allow clinicians to receive relevant, timely, informative, well presented analyses. The development of electronic medical records provides considerable potential for obtaining safety data, but much remains to be done to develop valid approaches for routine monitoring and detection of error and harm.
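
As an illustration of the kind of routine analysis this would enable, the sketch below computes the proportion of admissions attributed to an adverse drug event, assuming a hypothetical "ade_related" flag that the proposed coding change would make available; no such field exists in current routine data.

```python
# Illustrative only: the proportion of admissions caused by an adverse
# drug event, assuming a hypothetical "ade_related" flag that the
# proposed coding change would make routinely available.

def ade_admission_proportion(admissions):
    """Fraction of admissions flagged as caused by an adverse drug event."""
    if not admissions:
        return 0.0
    flagged = sum(1 for a in admissions if a.get("ade_related"))
    return flagged / len(admissions)

admissions = [
    {"id": 1, "ade_related": False},
    {"id": 2, "ade_related": True},
    {"id": 3, "ade_related": False},
]
print(f"{ade_admission_proportion(admissions):.1%}")  # prints 33.3%
```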

The careful attention to epidemiology and monitoring that would be a first priority for cancer or heart disease has been completely neglected in relation to the safety and quality of care. The Darzi review has placed a welcome emphasis on the overall quality and safety of care, requiring healthcare organisations to give the same attention to safety and quality data as they currently give to financial information. Unless serious efforts are made to develop reliable indices of safety and quality, we will still be unable to answer the question posed by this paper in five years’ time.

Notes

Cite this as: BMJ 2008;337:a2426

Footnotes

  • We thank Rene Amalberti, Bruce Barraclough, and Robert Wears for their comments on the manuscript.

  • Contributors and sources: CV has studied and reported widely on issues related to patient safety and clinical error, including adverse events and incident reporting. PA has done research on quality and safety indicators, BDF and AJ on medication error, and AH on the management of healthcare acquired infection. CV developed the first draft of the paper, coordinated the contributions and is guarantor for the article. PA, BDF, AJ, SI, AH, and KM wrote sections relating to their specific expertise. All authors contributed to the concluding sections and final draft of the paper.

  • Competing interests: None declared.

  • Provenance and peer review: Not commissioned; externally peer reviewed.

References