Summary
Healthcare worldwide is faced with a crisis of patient safety: notwithstanding occasional successes in relation to specific harms, safety as a system characteristic has remained elusive. We propose that one neglected reason why the safety problem has proved so stubborn is that healthcare suffers from a pathology known in the public administration literature as the problem of many hands. It arises in contexts where multiple actors—organisations, individuals, groups—each contribute to effects seen at system level, yet it remains difficult to hold any single actor responsible for those effects. Efforts by individual actors, including local quality improvement (QI) projects, may have the paradoxical effect of undermining system safety. Many challenges cannot be resolved by individual organisations, since they require whole-sector coordination and action. We call for recognition of the problem of many hands and for attention to how it might best be addressed in a healthcare context.
Introduction
Every day, everywhere, patients are injured during the course of their care.1–3 But the puzzle of how to keep patients safe has remained stubbornly difficult to solve, despite huge optimism, effort, investment, public pressure and some occasional successes in relation to specific harms over the past 15 years or more.4 We suggest that one neglected reason for slow progress in patient safety lies in a pathology known in the public administration literature as the problem of many hands. First described by the political philosopher Dennis Thompson,5 the problem of many hands was originally developed in the context of public officials. His concern was the challenge of how responsibilities can be allocated for the decisions and policies of government when so many different officials contribute in so many ways that it is difficult to identify the causal contribution of any single individual. Summarised in the old aphorism that ‘if everyone is responsible, no-one is’, the idea is not a new one.6 But Thompson's diagnosis, developed into the more general observation that a collective in its entirety may have responsibilities that cannot be attributed to any individual member of the collective,7 has stimulated new attention to this enduring conundrum.
The problem of many hands is now understood to arise in many contexts where multiple actors—organisations, individuals, groups—contribute to the performance seen at the system level, but no single actor can be held responsible for the overall outcome. These voids of responsibility may be highly consequential. System weaknesses may develop because of decisions and non-decisions that accumulate over long periods of time; because responsibility and authority for coordinating action to correct structural deficiencies are diffused, confused or absent; and because a profusion of localised practices and components erodes the integrity and functioning of the system as a whole. Eventually, catastrophe may erupt.
In his more recent work, Thompson has noted that ‘when many hands are involved, individuals who may bear some responsibility for harm are less likely to see what they do and less likely to be held responsible by others. The profusion of agents obscures the location of agency’.8 Understood in this way, the problem of many hands is not simply a restatement of the well-known economists’ problem of misaligned incentives between the multiple actors in a system (common in complex and diffuse fields such as healthcare). Instead, its emphasis is on the important tensions that may arise between individual and collective responsibility for adverse outcomes and how responsibility can be distributed in areas as diverse as climate change,9 engineering defects in large building projects,10 the financial crisis of the late 2000s, and the Deepwater Horizon disaster.8
Healthcare, characterised by autonomous, highly distributed and heterogeneous yet interdependent actors, is a paradigmatic example of the problem of many hands. Its actors include healthcare organisations, healthcare workers and their professional bodies, governmental agencies, manufacturers and suppliers of drugs and equipment, charities and foundations, patient advocacy groups, political representatives and political parties, insurers and payers, regulators and accreditors, professional associations, the legal system, information technology vendors and many, many others. Such naturally forming (rather than purposefully designed) networks typically find it difficult to coordinate their interactions,11 not least because the various actors may be rivalrous and lack shared commitments. They may experience intense conflicts over the nature of the problems they face, the goals to be met, the means by which these goals will be achieved and who will take responsibility for delivering on those goals and be accountable if they are not met.12 Only rarely can a single individual or entity be held responsible for failures at the level of the collective. The overall effect is that the kind of system-level action needed to manage risk effectively is frustrated.
As is frequently observed, the healthcare example stands in vivid contrast to many sectors that have become safer over time, such as the oil, building, nuclear and aviation industries. These sectors have found ways to confront and manage such challenges, typically through developing mechanisms of coordination, harmonisation and incentives for cooperation on safety that are robust to imperatives for competition. Such industries focus huge efforts at the level of the sector, agreeing on national or global standards and measures, harmonising technology, and using multiple techniques ranging from peer learning communities through to international standards and legal requirements.13, 14 None of this prevents local learning in individual organisations; indeed, it may support and facilitate it. For instance, the existence of international standards on vehicle safety does not stop individual car manufacturers from continuing to innovate in the design of their automobiles. Yet the actors in healthcare systems have failed to organise themselves in this way. With some important exceptions focused on a specific problem—such as, for example, the work of the International Organization for Standardization on anaesthetic and respiratory equipment—they do not function as a collective whole or sector-like entity, but instead act as a collection of atomised individuals, responsible mainly for themselves and not the system as a whole.
These failures to act at a sector level in healthcare have persisted even as efforts to hold individual organisations (particularly providers) accountable have increased markedly. But demands for organisational accountability do not by themselves solve the problem of many hands: they may, instead, paradoxically exacerbate it by eroding the recognition that some problems need to be solved at a scale greater than the individual hospital or practice. The issue of scale alone makes it difficult for single organisations to address many safety issues effectively. For instance, the expertise to investigate and address many safety problems is so specialised and multidisciplinary that few organisations will have the skills or resources needed to conduct a robust investigation or to design interventions that will mitigate risks. Local investigations of safety incidents are, accordingly, often conducted in ways that appear non-independent and amateurish in comparison with other high-risk industries that benefit from sector-wide expertise. In aviation, for instance, dedicated and highly skilled Commercial Aviation Safety Teams conduct sector-wide analyses of the major causes of preventable deaths, which can inform the design of sector-wide solutions. In contrast, healthcare relies on clinicians and administrators to conduct investigations, often with limited training in safety, and often recommending weak interventions such as ‘re-education’ as the risk reduction strategy.15 The problem is compounded by the failure in healthcare to share the learning from investigations: such learning often remains confined within the organisation where it occurred, the generalisable lessons neither generated nor implemented.16
Charging individual organisations with the responsibility for patient safety challenges may in fact reproduce the same problem seen when individuals are blamed for system defects: organisations themselves are just one element of a much wider context, and cannot, acting individually, resolve many of the deep structural issues at the heart of the safety problem. Simply put, many safety challenges defy the capacity of any single healthcare organisation to resolve. Controlling the supply side of medical devices, for example, is not within the gift of any hospital. Yet these devices consistently violate the principles of human factors recognised as fundamental to safety in other industries, and they rarely facilitate the creation of the kinds of integrated systems best suited to serve the interests of patients and practitioners. Instead, hospitals have to assemble, painfully, multiple items of equipment and devices that arrive piecemeal from multiple sources that do not coordinate their activities. Cobbled-together, highly fallible systems that pose risks to patients persist in part because the kinds of imperatives and structures needed to support system-wide standards for usability and interoperability are lacking. As a result, healthcare relies excessively on the heroic efforts of clinicians to ensure safety rather than on the design of safe systems. The problem of many hands is deeply implicated: there is no mechanism for coordinating the actors and their incentives to ensure they produce a safe, integrated supply chain, and no single party to hold accountable when it fails.
Failures of coordination and integration in healthcare also contribute to the current arms race of performance and quality metrics, the confusion and distraction it creates and the diversion of resources into improvement efforts that are often ineffective and inefficient.17 Despite the massive burden of quality metrics, no valid mechanism exists to monitor how many patients die or are harmed as a result of substandard care, leaving the field open to widely varying and sometimes lurid claims18; yet again, the locus of responsibility for solving this problem remains obscure. Thus, the weaknesses of the collective obstruct the achievement of individual actors’ goals, even though all involved support those goals in principle.
The problem of many hands also means that even when individual actors are seeking to secure improvements, the multiplicity of actors and their failure to act in a coordinated way may increase the risks in the system. The recent proliferation of local QI projects, though well intentioned, perversely adds to the difficulties. Many projects rightly target poorly designed or functioning healthcare processes. QI projects seeking to address process defects have delivered important successes and will always be a critical element of organisations’ efforts to improve quality. But they are not a straightforward solution to safety.
First, local projects are prone to uniqueness bias (the often flawed assumption that every situation is singular and requires a different solution) and may wastefully start from scratch every time. A given hospital is rarely the first to have a problem with delayed recognition and management of sepsis, overuse of urinary catheters, communication and handoff errors, suboptimal use of the surgical checklist or any number of other common targets for QI. Yet, because of the problem of many hands, system-level curation of safety measures, standards and solutions is lacking; it remains difficult even to find out how to assess the problem or what another organisation has done that worked or did not work, and academic publishing norms remain ill-suited to this task. The result is that local teams waste time and energy inventing solutions from scratch rather than customising solutions known to work. Second, because the skills and resources needed for safe design are rare and often unavailable to local QI teams, small ‘patches’ are often used to fix safety issues, resulting in a corresponding failure to tackle the bigger, deeper problems.
Third, and perhaps most consequentially for safety, QI projects undertaken locally have a troubling tendency to create locally specific work processes, routines and tasks that apply only in their context of origin and in so doing create new risks at the level of the healthcare collective. One basic problem, well known in safety science, is that too many localised processes contribute to unwarranted variability across health systems. Locally specific procedures and failures to harmonise safety procedures at the system level create the conditions for tragic outcomes, as occurred in the case of the last patient to die of inadvertent administration of vincristine by the intrathecal route in the UK.19 The implementation of electronic health records is increasingly making visible the underlying variability in clinical processes and practices, even across units in the same hospital.20 Some of this variability arises from individual clinician preferences (eg, in relation to dosing for vasopressors and electrolytes) and requires resolution through multidisciplinary dialogue and engagement with the scientific evidence. Much more of it arises, however, from historically reinforced patterns and norms that sustain poorly functioning processes rather than from principled, purposeful, multi-stakeholder design.21
The paradox is that local QI projects may, unless well coordinated, reproduce or exacerbate the unwanted effects of highly variable processes and procedures by making improvements in local settings that undermine the safety of the system as a whole. Thus, for example, the hospital that seeks to improve safety by using red labelling for syringes containing muscle relaxants may well be able to demonstrate better local risk control in its own operating rooms, but introduce new system-level risks: doctors moving from this hospital to the next may come to depend on the visual cue and make errors if it is not there (or if a different colour is used). The chaos surrounding colour-coding of wristbands, with the same colours signifying different meanings in different contexts,22 similarly introduces risks at the level of the system that may arise even as QI evidence suggests improvement at the level of a single organisation.
We have reached the limits of treating patient safety as something that can be solved provider by provider or through individual heroism. QI capacity will always retain an invaluable and indispensable role in organisations, but we need to acknowledge the risk that multiple ill-coordinated small-scale QI projects, substituting for sector-wide solutions, may degrade rather than improve the ability to achieve system-level change.
Arriving at a diagnosis of the many hands problem helps in clarifying the nature of the pathology, but it does not by itself suggest a therapy. Thompson himself is perhaps better at characterising the problem than solving it: his proposal, in the context of public administration systems, is that it is necessary to be able to identify individuals who knowingly and freely contribute to poor outcomes. Though it has potential for some kinds of issues, such an individualist approach is likely to have many limitations (both practical and ethical) in the context of patient safety, at least at its current stage of development. What is clear is that healthcare now needs to assume collective responsibility. It needs to tackle its safety problems as a sector through coordinated, interdependent and integrated action and collective, consensual solutions. The structures through which this may be achieved will, however, require much debate.
It is likely that much of what is needed is not coercive intervention by central governments or regulators, though such intervention will have a role: for a select group of challenges, perhaps especially those involving manufacturers and suppliers, something akin to a system integrator is needed,23 one with legally backed authority. But a top-down, centrally imposed dystopia of standardisation and enforcement may not be the answer to many challenges that arise from the problem of many hands. Instead, much is likely to be achieved by making those in healthcare accountable to each other through more horizontal, cooperative structures.24–26 Such structures can accommodate professional groupings who can work together to agree on solutions that are satisfying, workable, informed by professional values and clinical expertise, capable of being customised for specific situations and enforceable through peer sanctions. Much more thought needs to be given to finding the balance between global standards and local innovation, so that one facilitates the other; the key is that the kinds of strategies chosen should be thoughtfully selected and well fitted to risks and contexts.
Recognising the problem of many hands may be the first step in fixing it. We call for urgent attention to identifying the new structures and new accountabilities needed for a collective, system-level approach to protecting patients.
References
Footnotes
Contributors MD-W conceived the idea for the paper and prepared a first draft. PJP made substantial contributions to revising this draft and providing additional insights. Both authors approved the final draft.
Funding Wellcome Trust Senior Investigator Award for Mary Dixon-Woods (WT097899).
Competing interests MD-W is Deputy Editor-in-Chief of BMJ Quality and Safety.
Provenance and peer review Not commissioned; internally peer reviewed.