
Fixing patient safety: Are we nearly there yet?
  1. Peter McCulloch
  1. Nuffield Department of Surgical Science, Oxford University, Oxford, UK
  1. Correspondence to Peter McCulloch, Nuffield Department of Surgery, Oxford University, Oxford OX3 9DU, UK; peter.mcculloch{at}nds.ox.ac.uk


Reducing harm in hospital care using Human Factors and Quality Improvement approaches has proved harder than expected: better evaluation of our efforts, a more realistic understanding of the challenges we face and an intense focus on engaging staff are the key elements needed for progress.

Patient safety was not a recognised term in medical research parlance until the 1990s. Prior to this, avoidable harm from treatment was assumed to be rare, and failure was commonly attributed to the incompetence or lack of diligence of individuals. The emergence of convincing evidence that around 10% of hospital inpatients suffered serious harm from their treatment stimulated alarm, and a search for a rapid solution to this huge, previously unnoticed problem.1 Analyses of adverse events showed that their causes were usually complex, system-based and to some extent stochastic, echoing the typical findings of professional accident investigations in the transport and energy sectors. It seemed likely that systematic analysis of the underlying problems would result in effective solutions which could drastically reduce harm from treatment, and the concept of the high reliability organisation became hugely popular.2 Following the lead of civil aviation, healthcare professionals became enthusiastic about using ergonomics (Human Factors science) to solve the safety problem. Decades on, progress has been incremental, and studies of harm show results not dissimilar to those from the 1990s.3 4

So why are we not there yet?

There are several answers to this question. I focus here on the ones I think are most important, respectfully recognising the subjectivity in my position. First, there is a difference in the commitment of management and policymakers at the most senior level in healthcare when compared with leaders in airline companies and fossil fuel producers, for whom the massive financial consequences of a major accident were ever-present in safety decision-making. Second, the Human Factors approaches we tried to adopt from other industries had developed organically over time to fit particular contexts, and over-literal translation to healthcare environments was often a poor fit.5 Third, our evaluation of our own efforts has been consistently weak, making it hard to learn the right lessons. Finally and importantly, the modern healthcare industry, by its very nature, poses underlying wicked problems of interaction between structure and culture, which make change very hard. I describe each of these in turn below. Some points in this viewpoint are well-evidenced, while others are based on experience and perceptions, but I hope that the arguments provide a helpful and potentially creative opportunity for readers to react and reflect.

Management and relentless organisational pressures

Most scientific studies of interventions to improve patient safety have involved external groups as the driving force, and the involvement of leaders within the organisation has varied greatly. In many cases, senior leaders are permissive rather than actively supportive, usually limiting themselves to approving words while supplying minimal direct assistance. A key reason for this is that healthcare organisations, at least those in the public sector, are consistently faced with pressures to operate at or near maximum capacity, reducing management room for manoeuvre. Allowing staff time off for training is difficult, and suspending activities even for a short time virtually impossible, unless senior management plays an active role; and unless the initiative addresses an urgent threat to the viability of the organisation, they are unlikely to do this. Middle management and clinical leadership are also critically important; safety projects should be aligned with their goals, but in many hospitals they operate in survival mode, making this difficult to achieve.6 The bureaucratic structures within healthcare institutions can encourage hierarchy, defensiveness, rivalry and caution, whilst the over-riding need to cope with demand and maintain income restricts activities necessary for change, including planning, training and structural reform. This encourages a conservative, hierarchical decision-making structure, which depends on agreement between many actors and is dominated by the imperative of not affecting activity in the short term. From a management point of view, all this makes change a distinctly unattractive idea: you may or may not be rewarded if you take the initiative, the existing system will undoubtedly make the process extremely difficult, and your career may be severely damaged if you fail.

Human factors: the can of worms

Innovators cannot be held accountable for not finding the perfect solution immediately; this is rare in any context. Human Factors researchers started by standardising processes, adopting checklists and modifying a team training philosophy from civil aviation to improve team communication and co-operation.7 8 They were immediately faced with questions to which only trial and error could provide answers. What is the correct dose and duration for team training? Should we try to standardise everything across the system or focus on key events? How do we measure success? How do we ensure compliance? Are there data to show change? Is it better to fix the system by standardising or to improve team relationships and effectiveness? We found that if we tried to fix every problem, the complexity of the solution and the resource requirements defeated us. If we focused too narrowly, our impact on patient outcome was small. Team training approaches designed for aviation produced measurable effects on internal team process and function, but their impact on patient outcomes remains hard to demonstrate.9 A striking and repeated finding was that staff engagement with the intended changes was highly variable and often weak.10

Some lessons learnt from other fields were applicable and helpful. The importance of codesign of changes with frontline staff and the value of short iterative cycles of experimentation were products of the ‘lean’ Quality Improvement philosophy, which had some remarkable successes and underpins the work of several successful healthcare quality and safety organisations. Not surprisingly, combining attempts to improve teamwork with systems redesign seemed to do better than either alone.11 The research community gradually realised that we were dealing more with a sociocultural challenge than with a technical problem in process design. This refocused interest in the patient safety field onto culture and how to change it—but no reliably effective, evidence-based, generalisable solutions to this age-old question have yet emerged from our work.

(Not) doing the science right

Measuring the effects of complex interventions on human work processes is inherently challenging, and we have not helped ourselves by doing it much less well than we could have. Changes to complex processes need first to be trialled and iteratively improved, then tested in a wide range of settings, before they can undergo a definitive evaluation of their benefit; we have rarely followed this kind of stepwise evaluation pathway in patient safety work.12 The impact of some of the most important major initiatives has been blunted by study designs which have left the validity of their claims uncertain. The pivotal study on the WHO surgical checklist was a short-term, open-label, non-randomised before–after comparison with evaluation performed by the team carrying out the study.7 12 13 A subsequent observational whole-system study in Canada showed no benefit, and a later, better designed study showed no significant mortality reduction.13 14 The largest ever study of teamwork training compared trained and untrained units halfway through a multihospital programme, but selected units for training priority on the basis of ‘readiness’, thus introducing major bias.15 Randomised studies have been dismissed as impossible by leading figures in this field, but several have been done, and, like randomised studies in other fields, they have tended to disappoint their authors. This often reflects the lack of support for adequate stepwise preparatory studies to ensure that the RCT is feasible.12 Even having an independent control group is uncommon, and in the rare cases where it has been done, it has demonstrated its value in reducing overoptimistic interpretation of results.16 The value of parallel qualitative process evaluation was initially ignored, but where it has been done, the insights into why things worked or did not have often been compelling.17 Clearly, we need to do better science to make better progress. It may be more expensive, but if it yields more reliable answers, we may end up getting there faster in the long run.

The wicked problems: complexity, pressure and culture

This brings us to the wicked problems, which are interlinked. Healthcare staff attitudes, organisational structures and the stress of constant high demand can interact to produce a culture of fear, risk aversion, denial and arrogance, as reported in numerous investigations of systemic failures in hospital care. How does this happen? The rigid hierarchical management structure noted above is an understandable response to the need to maintain activity constantly near the theoretical maximum, and severely limits capacity for and interest in systems change. The cultural aspects of professional formation for clinical staff, however, may also have paradoxical negative effects on clinical engagement with systems change. Both medical and nursing training are steeped in an idealistic but very person-centred set of values including diligence, duty, perfectionism and selfless beneficence.18

This has served the NHS enormously well, by inducing countless talented people to work far harder for far less reward than they would otherwise have done. But the implied converse of this ‘heroic’ model is the ‘shame and blame’ mindset, in which adverse outcomes are attributed to individual failure, making staff fearful, defensive and judgemental, and inhibiting their acceptance of systems-based solutions.19 20 Frontline clinical staff working in this kind of organisation learn that attempts to change the system encounter great difficulties, are disruptive to their normal work patterns and usually fail. Because of the risks to leaders who take on change management directly, projects are usually driven by external academics, experts or consultants, who are seen by staff (sometimes correctly) as remote and unfamiliar with the realities of work in their environment, and who are clearly not acting under the direction of senior management. It is easy to see how apathy and cynicism can flourish in these circumstances. This symbiotic relationship between a defensive tribal culture and a change-resistant bureaucracy is incapable of delivering high-quality care but is very hard to change, and can therefore be highly durable over long periods of time.

So what can be done?

The wicked problems are embedded so deeply in the system and culture that only radical reform of both will lead to sustainable major change. This may be necessary at an institutional or whole-system level. Examples exist showing that this can be done and can be transformative, but experimenting with and implementing this kind of change across a large health system would take considerable boldness and change management skill.21 22

If we cannot tackle the wicked problems, we should downgrade our expectations, but we can still achieve a good deal. We have learnt from our mistakes over the last 30 years, and we can incorporate the lessons into our work. Enabling organisational improvements needs to start with a thorough understanding of ‘work as done’ and the gap between this and ‘work as imagined’.23 Systematic use of analytical tools to understand work processes is helpful in redesigning them. All changes need to be codesigned with frontline staff, whose implicit knowledge of the system is impossible for outsiders to reproduce.24 Expecting a single intervention to transform performance is fantasy, but trying to change everything is a recipe for exhausted failure. In a complex interconnected system, the limits of intervention are difficult to define, and too narrow a focus may predestine failure. Hence, projects which concentrate efforts on the staff in a particular specialty area may need to widen their scope and include, for example, both their patients and the other departments whose help they need as partners. Finally, staff engagement will remain the greatest challenge. The importance of careful thought and detailed planning in enhancing engagement cannot be overemphasised. Much can be done by using professional approaches to communication and having well-liked and respected ambassadors.25 Efforts to improve a sense of belonging and common purpose in the clinical team are logical as part of a strategy. The long-established principles of diffusion of innovations, and of social influence generally, need to be taken seriously as essential components of any project which intends to bring about change.26 This is a key area for future research.

Conclusion

So when it comes to patient safety we are not nearly there yet, but we have travelled through the territory and learnt a lot on the journey. It is harder than we thought, but the lessons learnt do point to what success requires. Our efforts need to be directed far more towards finding out how to engage frontline staff actively in change management, since without their support, nothing works. Radical transformation will require fundamental reform of the system, but better science, which directly involves management in studies at a whole-institution scale, can take us further, and perhaps provide the impetus to stimulate the necessary policy change.

Key messages

  • Harm and suboptimal outcomes due to imperfections in care remain stubbornly frequent in modern hospital care, despite three decades of study and attempts at intervention.

  • Efforts to improve safety using a Human Factors approach remain rational but have been impeded by over-reliance on modification of strategies from very different contexts, poor evaluation, lack of genuine management support and interdependent aspects of hospital staff culture and decision-making processes.

  • Frontline staff engagement is the key ingredient for success and is often difficult to generate. Understanding how to achieve it should be the main focus of our efforts.

  • Radical improvement may require radical reform of management structure and process in order to change staff culture.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

References

Footnotes

  • X @McCullochP

  • Contributors This article draws on experience of the challenges of implementing quality and safety changes in healthcare organisations in England and the US. Professor PM is a surgeon by background and has led multiple research groups focused on improving the safety of surgical care and quality of surgical research. He would like to acknowledge the assistance of Ms Olivia Lounsbury, a US quality and safety practitioner and researcher focused on removing barriers to implementation of safety changes in healthcare organisations. The piece was first conceptualised from discussions between PM and Ms Lounsbury, and both consulted a variety of international healthcare journals for evidence on the topic. Ms Lounsbury commented on and edited drafts, but declined to be an author. Professor PM is the article’s guarantor, and declares that the opinions expressed in it are his alone.

  • Funding The author has not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.