
Patient safety research
Patient safety research: does it have legs?
R J Lilford
Correspondence to: Professor R J Lilford, Director of the Patient Safety Research Programme, Department of Health, and Professor of Clinical Epidemiology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK


Patient safety research: where does it fit in?

Research into patient safety is highly topical. The Agency for Health Care Research and Quality spends about £40M per year under this heading, and the UK has established a Patient Safety Research Programme, which I direct. Patient safety research is somewhat unusual in that it works back from effect to cause: while most research asks about the effects of structures and processes on outcomes, patient safety research starts with the outcome (iatrogenic injury) and asks how it might be avoided. A research programme with an emphasis on safety is needed to determine how and why safety is undermined, and hence to develop and evaluate practices targeting safety as their main objective.

However, many other research programmes concerned with improving quality generally will also impact on safety, and a dialogue with these programmes is essential. Similarly, managerial organisations with special responsibility for safety have come into being in many countries. Such organisations, which focus specifically on safety (such as the English National Patient Safety Agency (NPSA)), need to mesh with other organisations (such as the Commission for Health Improvement) responsible for quality generally. Patient safety can be seen as a kind of knowledge management: continually learning, educating, and motivating. Patient safety programmes (whether research or managerial) have to be highly connected to the organisations they seek to influence, and they require a deep understanding not only of scientific matters but of the policy environment in which they work.

Patient safety agencies and research programmes have a special duty to reduce single acts which have serious consequences. Note that although the disaster can be traced directly to a single act, that act itself will have multiple antecedent “causes”. This, then, is where patient safety interventions get their bite; they intervene in the chain of events where the probability of the untoward event is the product of the probabilities of (independent) antecedent events. This leads us into consideration of the forms that research into patient safety might take.

Patient safety research has a role in:

  • Identifying the nature, extent, and context of iatrogenic injury (including errors of omission)

  • Uncovering the factors antecedent to injury, especially the underlying behavioural causes

  • Developing and evaluating interventions designed to reduce error

All of these involve a wide range of research methodologies.
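The earlier point that the probability of a disaster is the product of the probabilities of its independent antecedent events can be sketched numerically. The failure modes and probabilities below are invented for illustration and do not come from the article; the sketch simply shows why tightening any single link in the chain reduces the risk of the final event multiplicatively.

```python
def event_probability(antecedents):
    """Probability that all independent antecedent failures coincide:
    the product of their individual probabilities."""
    p = 1.0
    for prob in antecedents:
        p *= prob
    return p

# Hypothetical chain: wrong drug selected, check skipped, dilution omitted
links = [0.01, 0.1, 0.05]
baseline = event_probability(links)  # ~ 5e-05

# An intervention that strengthens only the checking step (0.1 -> 0.01)
# cuts the probability of the final event tenfold.
improved = event_probability([0.01, 0.01, 0.05])
```

This is why interventions "get their bite" by targeting a single link: because the probabilities multiply, reducing any one antecedent reduces the whole product by the same factor.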


Enumerating and categorising error can be done by counting reports (from reporting systems or litigation records) or by investigating all cases where there is an opportunity for error, in an attempt to ascertain both numerator and denominator information and hence measure incidence. Yesterday's research project can be today's routine data system, and many countries have established standing mechanisms to solicit, record, and act on reports of untoward incidents. Such systems go beyond traditional reporting procedures for drug reactions, device failures, transfusion reactions, falls from bed, and needlestick injuries. The English programme has commissioned research into factors (especially cultural factors) that may affect willingness to report error.

However, such denominator-free data underestimate many errors. This matters when management action is predicated not just on the existence of problems but on their incidence. Thus it does not matter, in policy terms, if drug calculation errors have been underestimated: there are far too many anyway, and we need to act. But deciding whether to divert national resources to improving "pain to needle" times for patients with a heart attack, or to reducing delay in operating on fractured neck of femur, would require more accurate measurement of the scale of each problem. Unbiased measurement of error is also needed for comparative purposes, for instance when monitoring the performance of healthcare providers or studying the effects of action to improve safety. Aggregated statistics are notoriously unreliable.1,2 Review of case notes is a widely used method for measuring error rates, but it is subject to a number of identified biases3,4 and to the problem that sicker patients present more opportunities for error. Enhanced technologies are being developed to measure error, such as digital imaging of endoscopic surgery and the installation of cameras in operating theatre lights. The definition and unbiased measurement of error will be discussed at a forthcoming Anglo-American conference on methodological issues in patient safety research.
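The point about denominators can be made with a small numerical sketch. The figures below are hypothetical, not taken from any study: they show how the problem generating more raw reports can nevertheless have the lower per-opportunity error rate, which is what should drive resource allocation.

```python
def incidence(events, opportunities):
    """Incidence rate: observed errors per opportunity for error.
    Raw report counts alone (the numerator) cannot give this."""
    return events / opportunities

# Hypothetical counts for two competing priorities
thrombolysis_rate = incidence(events=120, opportunities=2000)    # 0.06 per case
hip_fracture_rate = incidence(events=300, opportunities=15000)   # 0.02 per case

# Hip fracture delays generate more reports (300 vs 120), yet the
# per-opportunity rate is three times lower than for thrombolysis delays.
```

A reporting system alone would rank these two problems the wrong way round; only with denominator information can their true scale be compared.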


Deeper understanding of the causes of error builds on extensive work in other industries such as air traffic control, the nuclear industry, and others. Evolutionary selection did not equip the human mind for the complex technologies which it went on to create, so we are now prey to a disturbing range of psychological inadequacies.5 High risk industries reduce this problem by automation and close coupled systems. However, the continued presence of the human operator is required for those functions not easily automated and to intervene when events move outside system parameters. This latter, particularly, is a task for which human cognition is supremely ill suited. In these highly automated environments, where error is rare but catastrophic, the human operator is the “intelligent knowledge base” in the system, yet it is precisely this knowledge based problem solving which fails under stressful conditions leading, for example, to an incident at a nuclear plant in Ohio.6 On the other hand, well practised routine procedures which have become intuitive can also fail, for example, if the operator's attention is distracted—a factor identified as causal in some 6.5% of surveyed incidents in nuclear power plants.7 Nor do all errors originate at the operator level: the literature is littered with examples of failures attributable to organisational and cultural factors—despite two similar incidents, management at Three Mile Island nuclear power plant had done nothing to prevent its recurrence8; at Bhopal the plant superintendent was untrained for his job9; NASA top management cleared Challenger to launch because they were unaware of a launch constraint put in place by the NASA booster project manager10; the bosun of the Herald of Free Enterprise did not close the bow doors because “it wasn't part of his job”, even though earlier he had relieved from duty the crewman responsible for doing so11; and so forth. 
Qualitative research has proved invaluable in helping to unravel the complex social dynamics which determine safety in health services,12 and behavioural interventions have reduced accident rates in many industries.13


Basic research into the antecedents of injury should lead to the development of interventions designed to reduce risk. The potential effects can be modelled from the epidemiology and from the degree of confidence in the intervention. If the desired outcome is an almost inevitable consequence of an intervention, then agencies should simply act. For example, a number of recent deaths in England have followed the administration of undiluted potassium chloride. So, get the stuff off the wards and we will prevent these deaths. The effects of other interventions are less certain, and controlled before-and-after studies may be needed to provide convincing evidence. In that case, a proof-of-principle study (analogous to a phase 1 drug trial) may first be needed to refine the intervention. For example, I have put out a call for a study to determine the effects of various types of simulation and drill on the management of acute obstetric emergencies. A large trial randomising all the labour wards in the country must await the results of these initial studies.

In the end, patient safety will be enhanced by automating procedures that can be automated (e.g. interpretation of heart rate traces on the labour ward, automated dispensing of drugs), trapping errors before they occur (e.g. online reminders), reducing pitfalls at the interfaces between care settings (e.g. by linking hospital and community prescribing systems), improving the design of procedures and equipment (e.g. delivery systems which preclude inadvertent intrathecal administration of neurotoxic drugs), and education (e.g. simulations to teach procedures). Culture seems to be improved by introducing specific measures of this sort (which then have beneficial knock-on effects) rather than by non-specific exhortation.13,14 However, bringing about meaningful directed change requires resources and large scale managerial action. A major challenge for patient safety research is to work with managers to introduce change around an evaluation framework (preferably involving before and after measurements in both control and intervention sites). The role of patient safety research is to get the evidence about what is likely to work, then to proselytise for change based on that evidence and, above all, to encourage managers to innovate in such a way that the whole world may learn. However, we should be wary of inadvertently creating the problems we wish to avoid through an overzealous campaign: educational interventions designed to promote road safety awareness among school leavers have had the consistent and apparently perverse result of increasing road deaths (mediated by the unexpected effect of prompting earlier acquisition of driving licences).

Managers and policy makers beware; the road to hell is paved with good intentions. So, promulgate plausible service delivery interventions, but first liaise with those who commission research so that epistemologically sound prospective evaluations can be built in from the start. The Patient Safety Research Programme in England will work closely with the NPSA and others to ensure this happens.


The author thanks Rachel Anderson (Birmingham, UK) and Paul Barach (Chicago, USA) for helpful advice on earlier drafts.
