
Personal accountability in healthcare: searching for the right balance
Robert M Wachter
Department of Medicine, University of California, San Francisco, California, USA
Correspondence to Dr Robert M Wachter, Department of Medicine, University of California, San Francisco, Room M-994, 505 Parnassus Avenue, San Francisco, CA 94143-0120, USA; bobw@medicine.ucsf.edu

Abstract

While the patient safety field has emphasised ‘systems thinking’ as its central theme, experts have pointed to the need to balance this ‘no blame’ approach with the need for accountability in certain circumstances, such as failure to heed reasonable safety standards. Our growing appreciation of the importance of accountability raises several new questions, including the relative roles of personal versus institutional accountability, and the degree to which personal accountability should be enforced by outside parties (such as peers, patients, healthcare systems or regulators) versus professionals themselves (‘professionalism’). Identifying the appropriate locus for accountability is likely to be highly influenced by the structure and culture of the healthcare system; thus, answers in the UK will undoubtedly be different from those in the USA. Ultimately, a robust approach to patient safety will balance ‘no blame’ with accountability, and will also parse the correct target for accountability in a way that maximises fairness and effectiveness.


Introduction

Of the many vexing problems in patient safety, none are trickier than balancing the ‘no blame’ systems approach to medical errors with the need for accountability—at the individual, managerial and organisational levels. Informed by the pioneering work of Professor James Reason,1,2 the patient safety field embraced the former approach in its early years—both because it is largely correct (most errors are, in fact, committed by good people trying their very best) and because it was politically expedient. In the USA particularly, where mentioning ‘medical errors’ to a doctor immediately evokes near-Pavlovian thoughts of being named in a malpractice suit, the ‘no blame’ approach represented the only hope of engaging physicians in safety efforts.

While ‘systems thinking’ has led to many improvements in safety (eg, computerised order entry, bar coding, standardisation and simplification of processes, and improved equipment design), it tells an incomplete story. Specifically, a ‘no blame’ approach seems apt for some errors but not others; the latter category includes errors committed by incompetent, intoxicated or habitually careless clinicians, or by those unwilling to follow reasonable safety rules and standards.

This recognition has led to efforts over the past few years to balance ‘no blame’ and accountability. This rebalancing gained momentum as both the US and UK healthcare systems enacted policies to promote institutional, if not individual, accountability for performance. In the USA, such policies include more aggressive hospital accreditation requirements by the Joint Commission, as well as public reporting of safety hazards, ‘no pay for errors’ initiatives, and ‘Value-Based Purchasing’ by Medicare.3,4 In the UK, accountability has been promoted by incentive-based payments for general practitioners and high-profile investigations by the Care Quality Commission into reported safety lapses in individual hospitals.5,6

This paper highlights the tension between ‘no blame’ and accountability. It reflects on the value and limitations of the ‘Just Culture’ paradigm, and explores the role of personal versus organisational accountability.

A representative case

Scott Torrence, a 36-year-old insurance broker, was struck in the head while going up for a rebound during his weekend basketball game. Over the next few hours, a mild headache escalated into a thunderclap, and he became lethargic and vertiginous. His girlfriend called an ambulance to take him to the emergency room in his local rural hospital, which lacked a CAT or MRI scanner. The emergency room physician, Dr Jane Benamy, worried about brain bleeding, called neurologist Dr Roy Jones at the regional referral hospital (a few hundred miles away) requesting that Torrence be transferred. Jones refused, reassuring Benamy that the case sounded like ‘benign positional vertigo’. Benamy was worried, but had no recourse. She sent Torrence home with medications for vertigo and headache.

The next morning, Benamy re-evaluated Torrence, and he was markedly worse, with more headache, more vertigo, and now vomiting and photophobia (bright lights hurt his eyes). She called neurologist Jones again, who again refused the request for transfer. Completely frustrated, she hospitalised Torrence for intravenous pain medications and close observation. The next day, the patient was even worse. Literally begging, Benamy found another physician (an internist named Soloway) at Regional Medical Center to accept the transfer, and Torrence was sent there by air ambulance. The CAT scan at Regional was read as unrevealing (in retrospect, a subtle but crucial abnormality was overlooked), and Soloway managed Torrence's symptoms with more pain medicines and sedation.

Overnight, however, the patient deteriorated even further—‘awake, moaning, yelling’, according to the nursing notes—and needed to be physically restrained. Soloway called the neurologist, Dr Jones, at home, who told him that he ‘was familiar with the case and… the non-focal neurological exam and the normal CAT scan made urgent clinical problems unlikely’. He went on to say that he would ‘evaluate the patient the next morning’. But by the next morning, Torrence was dead.

An autopsy revealed that the head trauma had torn a small cerebellar artery, which led to a cerebellar stroke (an area of the brain poorly imaged by CAT scan). Ultimately, the stroke caused enough swelling to trigger brainstem herniation—extrusion of the brain through one of the holes in the base of the skull, like toothpaste squeezing through a tube. This cascade of falling dominoes could have been stopped at any stage, but that would have required the expert neurologist to see the patient, recognise the signs of the cerebellar artery dissection, take a closer look at the CAT scan, and order an MRI.7

While one could envision system improvements that might have helped prevent this tragic outcome, Dr Jones's refusal to come to the hospital to see a rapidly deteriorating patient seems like a personal failing. Of course, doctors are human (there was a reason that the Institute of Medicine's seminal report on patient safety was called To err is human8), and thus, a healthcare system that relies on human perfection is destined to disappoint. Cases like this one illustrate that challenging lines must be drawn, lines that distinguish expected human frailties from levels of performance that fall below professional standards. The latter circumstances require an accountability approach. As Dr Lucian Leape, widely considered the father of the patient safety movement in the USA, once told me:

‘There is no accountability. When we identify doctors who harm patients, we need to try to be compassionate and help them. But in the end, if they are a danger to patients, they shouldn't be caring for them. A fundamental principle has to be the development and then the enforcement of procedures and standards… When a doctor doesn't follow them, something has to happen. Today, nothing does, and you have a vicious cycle in which people have no real incentive to follow the rules because they know there are no consequences if they don't. So there are bad doctors and bad nurses, but the fact that we tolerate them is just another systems problem.’9

In the USA, the hypertrophied malpractice system partly arose through political happenstance (lawyers represent a powerful political force), but it also reflects a lack of public trust in the medical profession's ability to hold its own members accountable. This is a damning indictment: one of the core attributes of a profession is that, in exchange for unique powers and privileges, the public trusts it to regulate itself.

For a variety of reasons, medicine does poorly in this regard. Unlike attorneys, who are trained to challenge others, physicians are socialised to be collegial and non-confrontational. Moreover, because medicine is so specialised, doctors asked to review the performance of peers are likely to come from the same small community of specialists, raising the possibility that they will be either colleagues or competitors. There is strong evidence of physicians’ discomfort with peer review: although a 2010 survey found that more than two-thirds of physicians believe it is their responsibility to report an impaired or incompetent colleague to the appropriate authorities, one-third of those who could name such a colleague confessed that they had failed to report him or her.10

The ‘just culture’ model

It is challenging to draw lines between the expected flaws of mortals and those transgressions that merit an accountability approach. Interestingly, although James Reason's work on human error is often cited as the driving force behind the ‘no blame’ approach to medical mistakes, Reason was acutely aware of the need for accountability. In his classic book, Managing the risks of organisational accidents, Reason described the need to deal with clinicians who habitually choose to ignore important safety rules:

‘Seeing them get away with it on a daily basis does little for morale or for the credibility of the disciplinary system. Watching them getting their “come-uppance” is not only satisfying, it also serves to reinforce where the boundaries of acceptable behavior lie… Justice works two ways. Severe sanctions for the few can protect the innocence of the many.’

Reason then introduced the concept of the ‘Just Culture’:

‘A “no-blame” culture is neither feasible nor desirable. A small proportion of human unsafe acts are egregious… and warrant sanctions, severe ones in some cases. A blanket amnesty on all unsafe acts would lack credibility in the eyes of the workforce. More importantly, it would be seen to oppose natural justice. What is needed is a just culture, an atmosphere of trust in which people are encouraged, even rewarded, for providing essential safety-related information—but in which they are also clear about where the line must be drawn between acceptable and unacceptable behavior.’2

David Marx, a US attorney and engineer, has popularised the Just Culture concept by developing a model that distinguishes between ‘human error’ (an inadvertent act, such as a ‘slip’ or ‘mistake’), ‘at-risk behaviour’ (taking shortcuts that the caregiver does not perceive as risky—the equivalent of rolling through a stop sign at a quiet intersection), and ‘reckless behaviour’.11 Only the last category, defined as ‘acting in conscious disregard of substantial and unjustifiable risk’, is blameworthy. Other versions of the Just Culture algorithm, including an ‘incident decision tree’ produced by the UK's National Patient Safety Agency, are available.12 Another model, developed by US safety experts Allan Frankel and Michael Leonard,13 guides users to reflect on several questions before deciding whether punishment is warranted:

  • Was the individual knowingly impaired? (If yes, punishment may be warranted.)

  • Did the individual consciously decide to engage in an unsafe act? (If yes, punishment may be warranted.)

  • Did the caregiver make a mistake that individuals of similar experience and training would be likely to make under the same circumstances (‘substitution test’)? (If no, punishment may be warranted.)

  • Does the individual have a history of unsafe acts? (If yes, punishment may be warranted.)

While all these models are helpful to leaders trying to identify acts that merit an accountability approach, many hospitals (including those in the USA that have engaged pricey consultants to deliver Just Culture training) have continued to shy away from disciplinary approaches, particularly when the culprits are physicians. An important difference between the US and UK healthcare systems helps explain this reluctance.

Most US physicians are self-employed rather than employed by hospitals or large healthcare systems (although there is a trend toward more employment as physician payments fall and pressure grows to deliver integrated, coordinated care). This means that the job of hospital leaders, historically, has been to attract physicians to their facility, since physicians bring their patients (and the associated revenue) with them. Because doctors could threaten to take their patients to another hospital if they were unhappy, few hospitals were enthusiastic about setting and enforcing standards of behaviour and practice. The result was a tradition of non-accountability for physicians, even in hospitals that have disciplined nurses (who are employed by the institution) for ‘reckless behaviour’—clear evidence of a double standard.

In light of this, Pronovost and I have argued that uniform standards of accountability should be enforced for all healthcare providers, including physicians. In a 2009 paper entitled ‘Balancing “no blame” with accountability in patient safety’, we used the example of hand hygiene to make our case.14 We recommended that an accountability approach be considered when all of the following conditions are met:

  • The patient safety problem being addressed is important.

  • The evidence is strong that adherence to the practice decreases the chances of harm.

  • Clinicians have been educated about the practice and the evidence.

  • The system has been modified to make it easy to adhere to the practice, and unanticipated consequences have been addressed.

  • Physicians understand the behaviours for which they will be held accountable.

  • A fair and transparent auditing system has been developed.

Once these conditions are met, it is vital that transgressions are viewed through an accountability rather than a ‘no blame’ lens, and that appropriate discipline (everything from stern rebukes to fines and suspensions) be meted out. In our New England Journal article, we explained why this was so important:

‘Part of the reason we must do this is that if we do not, other stakeholders, such as regulators and state legislatures, are likely to judge the reflexive invocation of the “no blame” approach as an example of guild behavior—of the medical profession circling its wagons to avoid confronting harsh realities, rather than as a thoughtful strategy for attacking the root causes of most errors. With that as their conclusion, they will be predisposed to further intrude on the practice of medicine, using the blunt and often politicized sticks of the legal, regulatory, and payment systems.’14

Personal versus institutional accountability

Interestingly, at this point, most of the pressures for accountability (at least in the USA) fall on hospitals and healthcare organisations rather than individual physicians. For example, Medicare's Value-Based Purchasing programme, launching in late 2012, penalises hospitals, but not individual clinicians, for poor performance on measures of safety, quality and patient satisfaction.4 Because of this, most of the pressure today for individual accountability is not coming from outside regulators, payers or accreditors, but rather from hospitals that are being held accountable for their performance and are pushing those accountabilities down toward clinical units and even individual clinicians.

Nonetheless, independent of the policy levers used to promote accountability, it is worth reflecting on yet another tension: not between ‘no blame’ and accountability, but between individual versus collective accountability. In a 2011 article, Bell and colleagues emphasise the importance of collective accountability—accountability at the level of the individual clinician, the healthcare team, and the institution.15 This is an important distinction, because one can easily push the concept of individual accountability too far down the organisational chain. Safety expert Dr Charles Denham recounts the story of Jeannette Ives-Erickson, a nursing leader at a prominent US academic medical centre, whose habit was to call a nurse into her office after a bad error. She asked one simple question: ‘Did you do this on purpose?’ If the answer was no, then Ms Ives-Erickson would say, ‘Well then it is my fault… Errors stem from system flaws, and I am responsible for creating safe systems.’ Denham points out that it is ‘easy to automatically fall into a name-blame-shame cycle, citing violated policies, and ignore the laws of human performance and our responsibility as leaders’.16 The story nicely illustrates the challenges facing leaders in healthcare delivery systems, who must determine whether to push accountability down to individual clinicians while, at the same time, accepting their own responsibility to construct and maintain safe systems of care.

Conclusion

The patient safety field is at a crossroads as it grapples with a variety of fundamental but challenging questions. In the early years, we embraced the notion of ‘no blame’ and systems thinking as the cure-all for safety—it was novel (for healthcare, at least), had yielded strikingly positive results in other industries such as commercial aviation and nuclear power, and was politically astute, since it encouraged clinicians (particularly physicians) to participate in the safety enterprise.

A decade later, our thinking has become more nuanced. We now recognise that ‘no blame’ is the appropriate response for many errors, but not all. With this recognition have come increasingly powerful efforts, including policy changes, to promote accountability, which have exposed a new tension: whether that accountability is best targeted at individual clinicians or the organisational leaders who establish the systems and enforce the policies.

Like most complex questions in life, this one has no single easy answer. In calibrating ‘no blame’ versus accountability, and then further determining the locus of accountability, we should aim for the approach that best answers a series of crucial questions:

  • Do patients and their representatives feel that professionals—both clinicians and leaders—have attacked medical errors with the seriousness they deserve?

  • Do individuals in the systems—both clinicians and leaders—feel that they are being treated fairly?

  • Most importantly, have we made care safer?

The 19th-century German philosopher Arthur Schopenhauer once said, ‘Opinion is like a pendulum and obeys the same law. If it goes past the centre of gravity on one side, it must go a like distance on the other; and it is only after a certain time that it finds the true point at which it can remain at rest.’17 In the first few years of the patient safety movement, the pendulum swung too far toward systems. It is now swinging back toward individual and collective accountability. The ultimate success of our efforts to prevent harm will depend on ensuring that the pendulum comes to an optimal resting point.

References


Footnotes

  • Funding Prepared for the UK's Annual Patient Safety Congress, May 2012. Supported by The Health Foundation.

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.