
How reliable are clinical systems in the UK NHS? A study of seven NHS organisations
Susan Burnett,1 Bryony Dean Franklin,2 Krishna Moorthy,3 Matthew W Cooke,4 Charles Vincent5

Author affiliations:
  1. Centre for Patient Safety and Service Quality (CPSSQ), Division of Surgery and Cancer, Department of Surgery, Faculty of Medicine, Imperial College London, London, UK
  2. Centre for Medication Safety and Service Quality, UCL School of Pharmacy and Imperial College Healthcare NHS Trust, London, UK
  3. Upper Gastrointestinal Surgery, Division of Surgery and Cancer, Department of Biosurgery and Surgical Technology, Imperial College London, London, UK
  4. Warwick Medical School, Heart of England NHS Foundation Trust, Coventry, UK
  5. Centre for Patient Safety and Service Quality (CPSSQ), Division of Surgery and Cancer, Department of Surgery, Imperial College London, London, UK

Correspondence to Susan Burnett, Centre for Patient Safety and Service Quality, Imperial College London, Faculty of Medicine, Room 508 Medical School Building, St Mary's Campus, Norfolk Place, London W2 1PG, UK; s.burnett{at}imperial.ac.uk

Abstract

Background It is well known that many healthcare systems have poor reliability; however, the size and pervasiveness of this problem and its impact have not been systematically established in the UK. The authors studied four clinical systems: clinical information in surgical outpatient clinics, prescribing for hospital inpatients, equipment in theatres, and insertion of peripheral intravenous lines. The aim was to describe the nature, extent and variation in reliability of these four systems in a sample of UK hospitals, and to explore the reasons for poor reliability.

Methods Seven UK hospital organisations were involved; each system was studied in three of these. The authors took the delivery of each system's intended outputs as a proxy for the reliability of the system as a whole. For example, for clinical information, 100% reliability was defined as all patients having an agreed list of clinical information available when needed during their appointment. Systems factors were explored using semi-structured interviews with key informants. Common themes across the systems were identified.

Results Overall reliability was found to be between 81% and 87% for the systems studied, with significant variation between organisations for some systems: clinical information in outpatient clinics ranged from 73% to 96%; prescribing for hospital inpatients from 82% to 88%; equipment availability in theatres from 63% to 88%; and availability of equipment for insertion of peripheral intravenous lines from 80% to 88%. One in five reliability failures was associated with a perceived threat to patient safety. Common factors behind poor reliability included a lack of feedback, a lack of standardisation, and difficulties such as access to information outside normal working hours.

Conclusions Reported reliability was low for the four systems studied, with some common factors behind each. However, this hides significant variation between organisations for some processes, suggesting that some organisations have managed to create more reliable systems. Standardisation of processes would be expected to have significant benefit.

  • Clinical systems
  • reliability
  • patient safety
  • accreditation
  • health policy
  • healthcare quality improvement
  • quality improvement
  • medication error
  • medication
  • medical error
  • medication safety
  • continuous quality improvement
  • emergency department
  • prehospital care
  • safety culture
  • root cause analysis
  • risk management

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.


Introduction

Delivering high reliability has been a focus of safety-conscious industries such as aviation and nuclear power for many years, with impressive results.1 It has also become important in many other industries, to the extent that we now assume our microwave oven and mobile phone will work every time we use them and that our car will always start. One area in which airlines may not be seen as reliable is baggage handling; the worst performer in a recent online report2 was said to have lost 28 items for every 1000 passengers. Put another way, even this performance, seen as poor, represents a reliability of 97.2%. In healthcare the situation is very different and it is well known that many systems have poor reliability.3 4 Some studies have found reliability as low as 50% in delivering recommended evidence-based care for clinical conditions.3 5 Differences in patient characteristics may explain some of this variation, but it might reasonably be expected that the routine processes that support clinical care, such as ensuring relevant information is available to doctors in clinics, would have high reliability.
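For readers who want the arithmetic made explicit, the conversion from a failure count to a reliability percentage is simply the proportion of opportunities completed without failure; treating each lost item as one failed passenger journey (an assumption made purely for illustration):

```latex
% Reliability as the proportion of opportunities completed without failure
R = 1 - \frac{\text{failures}}{\text{opportunities}}
  = 1 - \frac{28}{1000}
  = 0.972 \quad (97.2\%)
```

The same calculation underlies the reliability figures reported for the four clinical systems below.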

Most previous studies of reliability in healthcare have focused on specific care processes in isolation. Because the concept of reliability has been interpreted and applied in different ways, it is difficult to compare the results of these studies or to identify common factors behind poor reliability. As a result, the size and pervasiveness of poor reliability in the UK NHS, and its impact on staff and patients, have not been established systematically. The purpose of our research was therefore to examine the reliability of four important and common clinical systems in UK hospitals and to investigate the causes of poor reliability. The systems were: availability of clinical information in surgical outpatient clinics; prescribing for hospital inpatients; availability of equipment in theatres; and availability of equipment needed for the insertion of peripheral intravenous lines.

The systems studied were selected to represent those known to be important to clinicians and where evidence exists of system failure. For example, in the UK over 39 000 reports were received by the National Patient Safety Agency relating to failures in documentation in 20076 and in Australia7 1.8% of medical errors were found to be due to the unavailability of clinical information. In a survey of theatre team members, respondents believed that nearly 10% of errors in the operating theatre were related to equipment problems.8 Equipment problems are also likely to cause disruption of workflow, delay case progression and lead to deterioration in the dynamics between team members. It is estimated that 1–2% of hospital inpatients are harmed by medication errors, the majority of which are errors in prescribing.9 10 One in three UK hospital inpatients has at least one peripheral venous catheter.11 The incidence of infection associated with these is usually low; however, due to the high frequency with which peripheral catheters are used, serious infectious complications produce considerable morbidity.11 12

Our aim was to describe the nature, type, extent and variation in the reliability of the four healthcare systems in a sample of UK hospitals, using the same methods throughout so that common issues and themes could be identified. We took the delivery of each system's intended outputs as a proxy for the reliability of the system as a whole. For example, the system designed to deliver equipment in working order to a theatre for a specific operation was considered to have failed if the equipment was not available when the surgeon needed it. This paper reports a synthesis of the findings from more detailed studies of each of the four systems and highlights the common systems failures.

Methods

Design

A prospective descriptive study of the reliability of four systems, conducted in seven NHS organisations. Three of the seven organisations were participating in phase one of the Health Foundation's ‘Safer Clinical Systems’ programme.13 Four additional organisations were selected to increase the breadth of the sample in terms of geographical spread and other characteristics. Analysis of a range of publicly available data for these organisations demonstrated that they represented a range of safety performance (online appendix),14 and the findings are therefore likely to be applicable to the wider NHS.

Each system was studied in three of the seven organisations, chosen on the basis of interest in the topic and to spread the workload. The study employed a mixed methods approach. For each system, we conducted a quantitative assessment of reliability, supplemented by a series of semi-structured interviews with key people in each organisation exploring the causes of systems failures. Data collection took place in spring/summer 2009.

Quantitative assessment of reliability

Data on reliability were collected for each of the four systems. Briefly, we identified key areas within each process in which to collect data relating to reliability. Specific methods for data collection were determined according to the system concerned, and through discussion with study organisations and the clinicians involved. Some data were collected directly by the research team, and some by nominated local leads. Members of the research team trained staff in participating organisations in the methods of data collection when needed and conducted all analyses. The definitions against which reliability was measured and the methods of data collection for each system are summarised in table 1. As part of data collection, we also asked those identifying problems to estimate their likely impact on patient safety and delays to patient care. More specific details of the methods used for each are presented elsewhere.14 16 17

Table 1

Summary of quantitative data collection methods by system

Qualitative exploration of system failures

For each system we explored the causes of the poor reliability identified, using qualitative semi-structured interviews with key informants in each organisation. Potential interviewees were identified by local study coordinators, given a participant information sheet and invited to sign a consent form. Interviews were mainly conducted face to face, although some were conducted by telephone, depending on availability and preference of the interviewee. We made audiotapes of interviews if possible, or took detailed notes if interviewees preferred. These interviews were then transcribed and analysed using the accident causation model18 as the theoretical framework. For each topic, a sample of at least one in four interview transcripts was checked by a second researcher. Associated subthemes were then drawn from these data and verified by a second researcher. The themes and subthemes from all systems were then grouped under the headings in the accident causation model. Following discussion between all researchers, the findings were synthesised across all systems, drawing out common themes across systems.

Results

Overview of systems reliability

We found overall reported reliability to be between 81% and 87% for the systems studied (table 2). However, in some cases these figures hide significant variation between organisations. For example, reliability for the availability of clinical information ranged from 73% in organisation E to 96% in organisation A (p<0.001; χ2) and the availability of equipment in theatres ranged from 63% in organisation D to 88% in organisation F (p<0.001).
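To make the type of comparison explicit, the variation between organisations can be tested with a χ2 test on counts of complete versus incomplete cases. The sketch below is illustrative only: the organisation-level denominators are not reproduced here, so the counts are hypothetical values chosen to match the reported 96% and 73% availability figures, and the scipy-based code is an assumption about tooling rather than the analysis actually performed.

```python
# Illustrative chi-square comparison of clinical information availability.
# Counts are hypothetical, chosen only to reproduce the reported 96% vs 73%
# availability; they are not the study's actual denominators.
from scipy.stats import chi2_contingency

observed = [
    [380, 16],   # organisation A: ~96% of appointments with complete information
    [280, 104],  # organisation E: ~73% of appointments with complete information
]

chi2, p_value, dof, _expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.2g}")
```

With counts of this order the test gives p well below 0.001, consistent with the significant variation reported.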

Table 2

Reliability of each clinical system measured

Implications for patient safety and delays to care

When systems failed, clinical staff often considered there was a threat to patient safety. In the outpatient clinics, 15% of 1161 patients had some type of relevant clinical information missing, and of these, surgeons considered that 20% were associated with a risk of harm. For inpatient prescribing, we found errors in 15% of 6605 medication orders, of which 19% were predicted to have serious consequences to the patient if not corrected. In the operating theatre, 19% of 490 operations were affected by equipment problems, and of these, 21% were associated with threats to patient safety. Finally, problems occurred in 13% of 350 cannula insertions, of which 23% were judged to have some impact on patient safety.
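Expressed as a proportion of all patients rather than of failures, these nested percentages imply that roughly 3% of clinic patients were exposed to a perceived risk of harm; the worked figures below are approximate and derived only from the percentages reported above:

```latex
% Outpatient clinics: translating the nested percentages into absolute numbers
0.15 \times 1161 \approx 174 \quad \text{patients with relevant clinical information missing}
0.20 \times 174 \approx 35 \quad \text{patients (about 3\% of all 1161) with a perceived risk of harm}
```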

Approximately 10% of the 490 operations were delayed, some by more than 30 min, because of missing or faulty equipment. In outpatient clinics, missing clinical information led to 1.7% of 1161 patients being given a repeat appointment. A total of 69% of the prescribing errors required the ward-based pharmacist to contact a member of medical staff and/or write in the patient's medical notes to resolve the problem.

Factors contributing to poor reliability

A total of 51 interviews were conducted across all systems and organisations. The contributory factors were grouped using the categories in the accident causation model14:

  • Organisation and management factors: a lack of accountability or ownership of issues, for example, no one taking responsibility for the contents of a patient's medical record or for resolving equipment problems in theatres; rather, staff tended to blame others for any problems. Also difficulties arising outside normal working hours, such as access to information or supplies.

  • Work environment: the design of systems (such as using a mixture of paper and computer records) and workspace (such as storerooms where equipment for intravenous line insertion is kept, making it hard to find items and to see if stock needs replenishing).

  • Team factors: poor communication, including a lack of feedback mechanisms, for example, prescribing errors being corrected by a pharmacist without the prescribing doctor being informed, poor documentation of medication changes in patients' health records, and problems with stock control for cannulation equipment not being reported to those responsible for it.

  • Individual staff factors: here the main issues were a lack of training; poor or no induction into clinical areas; and a lack of familiarity with how systems worked.

  • Task factors: no standardisation, for example, in how certain drugs are prescribed or discontinued on the paper chart, and how equipment is stored in theatres. Also a perception of over-complexity of processes, for example, systems for obtaining health records and off-site preparation of equipment.

Some additional organisational factors, while less pervasive than those above, were also identified. These included the challenges associated with managing ‘outlier’ patients on remote wards, and unclear handwriting. Similar issues were found relating to clinical information in outpatient clinics and to prescribing, both of which involve ordering care (medication, tests or investigations) and communicating these requests in a timely fashion. As would be expected, common issues were also identified between the systems for obtaining operating theatre equipment and the equipment needed for the insertion of intravenous cannulae. In addition, there was a lack of feedback about stock levels, including poor communication about requirements, lack of clarity about responsibilities for ordering and checking stock, and a lack of systems to automatically highlight when stock levels were incorrect. Lack of resources was also raised as an issue in relation to obtaining equipment needed in operating theatres, but to a much lesser extent for the other systems studied.

We found that in many areas, over time, staff had come to accept poor reliability as normal, thus not reporting or challenging problems; for example, acceptance of missing equipment in theatres or wrong equipment in cannulation packs.

When asked how cases of poor reliability were dealt with, staff in some cases described the workarounds they had developed, for example, obtaining information from patients rather than from their health records, or using disposable gloves as tourniquets; the risks of such workarounds could not be directly assessed. In other cases, risks were taken, such as making clinical decisions without the relevant information and carrying used sharps to sharps bins in remote locations.11–13

Strengths and limitations

The strengths of this study are that we studied each process in three organisations, using standard methods and definitions. While participating organisations were not randomly selected, the use of multiple organisations increases generalisability compared with most similar studies, which have been based at only one site. We have also been able to synthesise common factors across more than one process, using a published and widely used model of the factors that affect clinical practice.

The main limitation is that quantitative data collection was based on self-recording by hospital staff. This may result in under-reporting. However, we tried to minimise the extent of any under-reporting by choosing processes in which poor reliability is an annoyance for the staff, making them more likely to be motivated to report problems. We also kept data collection periods relatively short to reduce data collection fatigue, made data collection forms as simple to complete as possible, and offered help and support through the research team and local coordinators.

There may also be some response bias in the qualitative data collected in the interviews, in relation to the selection of interviewees and/or their responses. Potential interviewees were identified by local coordinators on each site, and it is possible that they may have identified interviewees who were interested in this type of work or likely to be amenable to participation.

Discussion

Overall the reported reliability in the clinical systems studied was between 81% and 87%. Put another way, the clinical systems studied failed on 13–19% of occasions. In everyday life this would mean your car not starting 1 day every week or your luggage being lost once in every four overseas trips you take. Would you still use email if you knew that one email in seven would not reach its intended recipient? For a UK hospital these figures mean doctors dealing with missing clinical information for one in every seven patients seen in clinics; missing or faulty equipment in one of seven operations performed (two in every five operations in some organisations); and time wasted by nurses and pharmacists correcting problems and searching for records or equipment for four or five patients every day on a typical 30-bed ward. We also found that about 20% of reliability failures were associated with a potential risk of harm. On this basis it is hardly surprising that patient safety is routinely compromised in NHS hospitals,19 20 and that clinical staff come to accept poor reliability as part of everyday life. Furthermore the systems studied are only a small part of the total healthcare delivery process for individual patients and it is very likely that the total reliability for the whole clinical pathway will be orders of magnitude lower than the 81–87% found for the individual systems in this study.
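The pathway argument can be made concrete with the standard series-reliability formula; the figures below assume, purely for illustration, a pathway of ten independent steps each performing at the 85% level found here:

```latex
% Series reliability of a pathway of n independent steps, each with reliability R_i
R_{\text{pathway}} = \prod_{i=1}^{n} R_i
% e.g. ten steps, each at 85% reliability:
R_{\text{pathway}} \approx 0.85^{10} \approx 0.20 \quad (20\%)
```

Under these illustrative assumptions, only about one patient in five would pass along the whole pathway without encountering any reliability failure.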

The variation in reliability between organisations, with much higher levels of reliability identified in some, suggests that it is possible to create more reliable systems, although even these can probably be improved upon. This variation also illustrates the danger of averaging performance across organisations. Common factors causing poor reliability were found across systems. These included a lack of feedback mechanisms; a lack of standardisation; and difficulties such as access to information or supplies outside normal working hours. Other factors common to more than one system included stock control, handwriting and the management of ‘outlier’ patients on remote wards. This would suggest that improving common system factors in organisations could have a bigger impact on patient safety than current approaches focusing on individual areas of risk.21 For example, creating good feedback systems would influence all the clinical systems studied here and also many other areas of clinical work. Improving stock control mechanisms would likewise have an impact across a number of key clinical systems, making the lives of clinical staff less frustrating and improving patient safety. More important, perhaps, is the need to develop a culture of challenge and feedback so that poor reliability and the associated potential for patient harm are no longer accepted by staff as part of normal everyday work. Industries such as aviation and oil production have a track record of success in this area, standardising processes and equipment wherever possible, understanding the human factors involved and working to improve safety climate. Many NHS organisations have started work to introduce these practices, for example, using the WHO surgical safety checklist; using lean manufacturing methods; training teams in crew resource management; and standardising care using clinical pathways. However, our research indicates that there is much more work to do before the UK NHS can claim to offer patients levels of reliability in their care that match even the best found in other hospitals, let alone other safety-conscious industries.

Based on the approach of the US Institute for Healthcare Improvement (IHI), reliability below 80–90%, as found in our study, indicates the absence of any articulated common process, whereas reliability of around 95% suggests the presence of a clearly articulated process.4 On this basis, our results indicate that if healthcare organisations in the UK are to begin to improve the reliability of their core processes, they need to articulate or document each process as it is expected to function and define its required outputs. This is a prerequisite for measuring levels of reliability and for understanding where processes fail.

For two systems, availability of information in outpatients, and availability of equipment in operating theatres, patients experienced a delay to their care as a result of poor reliability. This indicates scope for savings with improved reliability since every new appointment and every delay in theatre is likely to add to healthcare costs. There are also broader opportunity costs that may be substantial, such as suboptimal care, adverse events, loss of confidence by patients and referring general practitioners, and the potential for medical negligence claims.

Finally, further research is recommended to build on the work presented here, using standardised methods and definitions to study reliability across multiple organisations. More work is also needed to understand how and why clinical systems fail, with particular emphasis on whole systems, including organisational context and behaviours, rather than an isolated focus on process redesign. Such research would contribute to understanding the context within which specific solutions, such as care bundles, are introduced. In these times of financial stringency, healthcare would also benefit from further studies to explore the economic consequences of poor reliability and to quantify the link between reliability and harm.

Conclusions

Considerable attention has been given to reliability in healthcare in recent years, most notably in safety initiatives such as the IHI's ‘5 Million Lives’ campaign in the USA,22 and ‘Patient Safety First’ in England.23 Here the focus has been on reliably delivering specific care requirements to patients, such as on-time antibiotics and prophylaxis of venous thromboembolism. Less attention has been given to the reliability of the overall system of care within which these care requirements reside. From our findings it is clear that improving the reliability of clinical systems must become a priority for hospital leaders in order to reduce costs and improve patient safety, for example, by making all clinical information available for every patient seen in clinic and ensuring correct, functioning equipment for every patient in theatre. The authors hope that presenting these findings will raise the profile of systems reliability and its associated but unseen impact on staff and patients.

References

Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.


Footnotes

  • Funding The study was commissioned and funded by the Health Foundation (Registered Charity Number 286967) as part of their work to examine systems reliability in healthcare and its effects on patient safety. The Centre for Patient Safety and Service Quality is supported by the UK National Institute of Health Research. The researchers are independent from the funders.

  • Competing interests All authors have completed the Unified Competing Interest form at http://www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that (1) all authors have support from The Health Foundation for the submitted work; (2) all authors have no relationships with The Health Foundation that might have an interest in the submitted work in the previous 3 years; (3) their spouses, partners, or children have no financial relationships that may be relevant to the submitted work; and (4) none of the authors have non-financial interests that may be relevant to the submitted work.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement The full research report is available on the Health Foundation web site. Further information can be obtained from the authors.