The opening keynote session of the 16th Annual National Patient Safety Foundation Patient Safety Congress, held 14–16 May 2014, featured a debate addressing the merits and challenges of accountability with respect to key issues in patient safety. The specific resolution debated was: Certain safety practices should be inviolable, and transgressions should result in penalties, potentially including fines, suspensions, and firing. The themes discussed in the debate are issues that healthcare professionals and leaders commonly struggle with in their day-to-day work. How do we draw a line between systems problems and personal failings? When should clinicians and staff be penalised for failing to follow a known safety protocol? The majority of those who listened to the live debate agreed that it is time to begin holding health professionals accountable when they wilfully or repeatedly violate policies or protocols put in place by their institutions to protect the safety of patients. This article summarises the debate as well as the questions and discussion generated by each side. A video of the original debate can be found at http://bit.ly/Npsf_debate.
- Patient safety
- Safety culture
Patient safety work has been guided by the idea that errors occur not because people are bad, but because systems are flawed, with gaps that allow errors to occur. Yet nearly 15 years after the Institute of Medicine report1 that brought to light the impact of medical errors, progress has been slower than many would like. Is it time to reassess the practice of ‘no blame’ when it comes to patient safety? If so, by what measure and for what transgressions do we hold people accountable? How do we impose appropriate penalties?
The question of whether certain patient safety practices should be considered inviolable, with violators subject to penalties potentially including fines, suspensions or loss of their jobs, was the topic of a debate at the 16th Annual National Patient Safety Foundation Patient Safety Congress, held in May 2014. GSM, chief clinical officer, Partners HealthCare, defended the focus on systems as the best use of resources to advance patient safety. RMW, chief of the Division of Hospital Medicine, University of California, San Francisco, argued that leaders must address individuals who choose to disregard established safety rules. Each took his respective position for the purposes of debate only.
The themes discussed in this debate are issues that healthcare professionals and leaders commonly struggle with in their day-to-day work. How do we differentiate between systems problems and personal failings? When should clinicians and staff be held accountable for not following a safety protocol? What do patients and families think when they are the victims of errors for which no one is to blame but the system?
As these safety leaders articulated their arguments, it became clear that there are many grey areas in the spectrum between accountability and a continued focus on systems and processes. Health professionals work under stressful and complex conditions. Resources are being stretched. Disregarding a safety rule may be a necessary workaround in an emergency. Punishing people for such transgressions will demoralise the workforce and add fear, potentially increasing the risk of error. Choosing to disregard a safety rule repeatedly, however, may be considered wilful misconduct. Not punishing that behaviour may demoralise those who consistently follow the rules, or promote further rule violations.
TKG, president of the National Patient Safety Foundation and of the NPSF Lucian Leape Institute, moderated the debate. Attendees of the Congress—nurses, physicians, health leaders, researchers and patient advocates—were invited to submit questions and were polled about their thoughts on the issue throughout the session. Before a word was spoken, attendees were already lined up on the side calling for accountability (see figure 1).
This article summarises the debate as well as the questions and discussion generated by each side.
The Resolution under debate: Certain safety practices should be inviolable, and transgressions should result in penalties, potentially including fines, suspensions, and firing.
Dr Robert Wachter's opening remarks
I first became interested in patient safety in 1994, when I read Lucian Leape's paper ‘Error in medicine’.2 It introduced to me the concept that errors are mostly manifestations of dysfunctional systems, and most errors are committed by good, competent, caring people trying to do the right thing. That actually made a good deal of sense to me. I went on to read some of the foundational works in patient safety, notably a book by Jim Reason.3
Reason studied how errors happen in complex organisations; he studied space shuttle crashes, train derailments and Three Mile Island. When I first read about the Swiss Cheese Model,3 it was an epiphany to me—this idea that in complex organisations the root cause of most errors is that the protections the organisation has built are leaky, and on a terrible day, all the holes align and the error makes it through.
I work at UCSF Medical Center, and my colleagues—doctors, nurses, pharmacists—are spectacularly good. They are careful, they care deeply and they work hard—and yet we have errors frequently. So reading Reason's work made very real to me this idea that errors can't be all about bad people. It's mostly about good people working in dysfunctional systems.
The other reason the ‘No blame’ and ‘It's the system’ approach was so important is that in the early years of patient safety, in the late 1990s, it was vital to get doctors engaged in this work. At that time, if you said to a doctor, “We're going to talk about medical mistakes,” all they could think about was malpractice, and from then on, they were not listening. So focusing on ‘no blame, it's the system’ was an incredibly important message, both because it is mostly correct and because it was a politically astute way of getting started.
Several years ago, however, I began to recognise that we were invoking the idea of ‘No blame’ and ‘It's the system’ in almost every situation of harm or risky behaviour. One example is in the area of hand hygiene. In visits to many hospitals, I began to ask about hand hygiene rates. At those with less-than-stellar rates, I'd ask, “What are you trying to do about that?” And the response would be, “Well, we're trying to improve the system.” But as I looked around, it seemed that the system was pretty good—with gel dispensers everywhere, posters of clinical leaders washing their hands and computer screen savers that reinforced the message. I began to realise that the nature of the problem is a lack of accountability.4
A few years ago, I had the opportunity of giving the James Reason Lecture in the UK. As I was preparing the talk, I went back and re-read the book Managing the Risks of Organisational Accidents, Reason's foundational work from 1997, and I asked myself if the founder of the Swiss Cheese Model ever said anything about how to deal with transgressions. And in fact he had. Reason wrote,
A no-blame culture is neither feasible nor desirable. A small proportion of human unsafe acts are egregious and warrant sanctions … severe ones in some cases. A blanket amnesty on all unsafe acts would lack credibility in the eyes of the workforce. More importantly, it would be seen to oppose natural justice. What is needed is a just culture, an atmosphere of trust in which people are encouraged, even rewarded, for providing essential safety-related information—but in which they are also clear about where the line must be drawn between acceptable and unacceptable behavior.3
When I asked hospital leaders what they do about people who refuse to follow reasonable safety rules, the reply was often “We have a ‘just culture’.” They would show me Reason's algorithm3 or David Marx's algorithm5, which made it pretty clear what to do in different situations. But, I asked, what if the head of CT surgery or neurosurgery says, “I don't like the idea of a time out, so I don't do that”? That question was answered with silence.
In fact, many organisations are not doing very much about that kind of a situation because it's really hard to address. But by not addressing it, we all become enablers of allowing people, and in some cases only certain people, to break key safety rules. The message it sends to an organisation and the entire field is quite profound.
Henry Kissinger once said, “Weakness is provocative,”6 and he was right. No blame is usually the correct response to most errors and unsafe acts. But by invoking it all the time, sometimes inappropriately, I think we are harming more patients and, quite likely, we are losing the trust and credibility of those who look to us for leadership in patient safety. Leaders who do nothing about clinicians who fail to clean hands, clinicians who bypass time-outs, or clinicians who engage in disruptive behaviour are being provocative—and it is time we stopped.
Dr Gregg Meyer's opening remarks
The resolution should be denied because it could very well be a danger to patient safety. The demand for accountability ignores our humanity. It runs counter to what we know from safety science. It's impractical, it does not work, it drives the wrong behaviours and it's a distraction.
To the first point: it ignores our humanity. We must recognise our humanity when it comes to making mistakes, to forgetting to do that hand hygiene, to skipping through that one part of the checklist on a particular morning. It is part of our human nature to err. We make mistakes, and when we are fearful, when we are worried that somebody is watching us and is going to punish us, we let that fear push us further over on the performance anxiety curve, and we make even more mistakes.
The proposition also ignores safety science. Advancing patient safety requires continuous learning. Being open about our failings is the first and most important step to getting that process going. Blame will create distrust and it will slow down the learning of the safety movement.
H.L. Mencken said, “For every complex problem, there's an answer that is clear, simple, and wrong”, and accountability is clear and simple and wrong. Patient safety is complex, and we need to treat it as such.
The proposition would not work. We've had decades of torts that were based on blame. Has the medical malpractice system made us safer over the last five or six decades?
We have evidence of how wrong things can go if we assign blame. We all celebrate our aviation safety reporting system. It's been emulated around the world, and the CALLBACK newsletter is something we all aspire to in terms of the way they share openly about those hazardous events. New Zealand was one of the countries that emulated it. They set up a system that was working well. But in 1991 they decided to divulge the identity of one of the reporters.7 In so doing, they shut the system down. No one reported anymore. Everyone was afraid. I would argue that the value of divulging the name of that pilot was not worth the 5 years it took to restart their safety reporting system.
The proposition is also impractical. Whenever we create such rules, there are three ways people respond to them. The first is to change the underlying process, to do it right, to make doing the right thing the easy thing. That takes time, energy and resources. The other two are shortcuts taken to appear compliant with whatever rules have been set: gaming the rule, checking the box without the seriousness of effort expected; or mounting unsustainable efforts, throwing resources at the problem that we cannot keep there for the long run. Driving to accountability will push us toward those shortcuts.
It is also important to ask, “Where does it stop?” The accountabilities that Dr Wachter has proposed, such as hand hygiene and the preoperative checklist, seem quite defensible places where we should start with accountability, but what comes next and where does it stop?
For example, at my institution, a patient who had undergone surgery had a postoperative event that led to a significant bleed. The requirement for any blood transfusion was that two nurses had to sign off on the unit of blood with their full names and titles. When an accreditor came in for review, they found that we had violated that policy 40 times in one particular case: there were only initials on the units. The violations were deliberate and repeated. All 40 occurred for a single patient during a major bleeding event, and the patient survived.
If those nurses were going to be held accountable for the fact that they did not sign their full names, would they have been as responsive in getting those units of blood into that patient as quickly as they needed to? There are going to be emergent exceptions to any of our rules, but where do you draw the line?
We need to think carefully here, and the thoughtful recognise that punishment is not the route to improving safety. It drives the wrong behaviours. Recent analysis identifies rules-based behaviours as one of the biggest sources of safety events, so what behaviours are we driving toward with this push for accountability?8
We need to keep focusing on harm, but it is easier to punish than it is to put resources into making sure that we create a less harmful environment for our patients. When times are tight, reverting to punishment as a quick, expedient means of ensuring that we are moving forward on patient safety has a lot of attraction for our colleagues because it is a lot easier than educating, building systems and removing barriers.
Finally, the proposition is a distraction from the real barriers to patient safety. The real barriers aren't bad people. They are production pressure and they are the lack of systems. We need to spend our time on improving those rather than on counting checklists, policing and being Big Brother.
Lucian Leape said that the single greatest impediment to error prevention is that “we punish people for making mistakes”.9 It was right when he said it, and it is right now.
Questions and discussion
A poll of the Congress attendees after the opening arguments showed a slight move in opinion toward Dr Meyer's position (see figure 2). Yet the vast majority continued to agree with Dr Wachter's argument. In questions from the attendees, as well as direct point–counterpoint discussion between the debaters, the grey areas of each side emerged.
Personal versus organisational accountability
Can the resistance to accountability be attributed to the associated need to admit personal failure and commit to changing behaviour? How do we explain to family members when their loved one suffers because of a safety lapse and no one is held accountable?
Dr Meyer agreed that people should be held accountable in certain clearly defined instances, but that those instances are actually quite rare. Criminal activity should of course be treated as criminal. The bigger issue, he argued, is to consider how we can best advance patient safety.
The patient safety field is still quite young, and a focus on accountability will be a distraction from the important work of continuing to improve systems. He emphasised that every safety lapse is a learning opportunity—even, or perhaps especially, those that result in patient harm. A health system that has harmed a patient is obligated to make sure that everyone in the organisation knows that harm occurred and that there are behaviours and norms that can help prevent harm from happening in the future. Accountability lies at the organisational level.
Dr Wachter emphasised the gradations between criminal activity and innocent mistakes. By not holding people accountable for many of the events that fall in between those two extremes, leaders do a disservice not only to patients, but also to those colleagues who follow the rules consistently. How can we tell families that no one can be held accountable for certain kinds of harm? A no blame culture may be easier for leaders, because holding people accountable is hard, but our first concern should be patients and their families.
How do we differentiate between mistakes and wilful misconduct and between patterns of behaviour and errors or omissions committed under complex conditions? Wilful misconduct, Dr Wachter explained, is when someone chooses to ignore a safety rule or standard, despite knowing what the rule is and understanding it. An exception would be doing a workaround for a system that is flawed.
It is important to consider the notion of choice when discussing wilful misconduct or patterns of behaviour. If, for example, an organisation decides to require time-outs for a particular procedure, and they've done good work around culture and leadership and systems, then not doing a time-out is a choice that an individual makes, and a single violation is important, noteworthy, and should be dealt with by a system of accountability.
In other situations, for example hand hygiene, almost everyone forgets to do it periodically. A single lapse does not necessarily mean that the person is wilfully choosing not to follow the rule. The question is: What is the appropriate response to a person who chooses repeatedly to violate a safety rule? Dr Wachter said that for the system to allow that to go unaccounted for is unfair and immoral.
Dr Meyer noted how difficult it is to parse the question of deliberate versus accidental actions. He emphasised the amount of time and resources that would be required to determine whether an action was wilful or whether it was just a mistake, and argued that those resources would be better utilised in developing systems, analysing errors and creating processes to avoid them. We need to be able to understand our frailties, accept them and learn from them, he said.
Outcomes and data
Should patient outcomes be the yardstick by which a patient safety rule would be pegged as inviolable? What of variation in performance? If data and anecdotes can show variation, is zero tolerance for safety violations our moral obligation?
Dr Wachter turned again to the example of hand hygiene to explain that often a particular outcome cannot be traced to a specific transgression. Instead we need to define acts that are associated with better outcomes, and set the expectation of universal compliance.
Dr Meyer said that looking at outcomes is important to ongoing progress in patient safety, and by improving important metrics, such as decreasing methicillin-resistant Staphylococcus aureus (MRSA) rates, we can better engage staff and patients in safety work. Demands for accountability will focus on outliers at one end of the performance spectrum, to the detriment of those at the high-performing end. Instead, he argued for using data to highlight those top performers. Focusing on positive deviants will lift the performance of others over time.
Accountability and safety culture
How does a lack of accountability affect the culture of safety? Should everyone be held accountable—or only physicians?
Dr Meyer said everyone working for an institution is accountable for its culture of safety, but rather than focusing on the sharp end, with frontline staff, we need to collectively focus on culture at the blunt end, with policies and procedures in place to advance safety. Invoking a recent NPSF Lucian Leape Institute report,10 he asked how we can reconcile the notion of restoring joy and meaning in work as a mechanism to move safety forward with a simultaneous call for accountability and a revival of blame.
Calling that a false dichotomy, Dr Wachter pointed out that seeing one's colleague choose to disregard important rules and go unpunished also saps joy and meaning. Whatever the rules are, they must be enforced equally for everyone, physicians included.
We must balance the effort to create work environments where good people want to work and can do their best with the needs of patients, families and advocates, who perceive a failure to confront egregious acts that flagrantly violate safety rules.
Closing remarks: the future of patient safety
Ultimately, this debate was about how best to advance the field and continue to make healthcare safer. Dr Wachter said it is the job of leaders to figure out how best to deploy resources. Most organisations are already expending resources to monitor compliance; what is lacking is the feedback loop, letting people know that they are doing something wrong, giving them the opportunity to improve, and holding them accountable if they fail to do so. He suggested that the healthcare industry is capable of invoking these sorts of standards in appropriate ways and when the circumstances are right. He reiterated his belief that invoking no blame had been a good way to start our work in patient safety, but that today, as our field matures, there needs to be a balance of no blame and accountability.
Progress in patient safety has been slower than anyone would like, but Dr Meyer pointed out the need to focus on how far we have come. Bringing blame into the discussion, he said, would decelerate that progress, not move it forward. We still have much systems work to do. Moreover, a move to increase accountability would require additional resources to police policies and processes. A better use of scarce resources in healthcare would be to improve underlying systems, do root cause analyses, and look for factors that contribute to errors and safety lapses.
In conclusion, Dr Gandhi asked attendees to vote for a final time on the resolution, and a decided majority voted in favour of accountability (see figure 3).
This audience of health professionals and patient advocates clearly agreed that there must be a move toward increased accountability for patient safety. The debate highlighted the benefits of imposing a system of accountability, as well as the challenges and risks if such a system is not implemented thoughtfully and fairly. Going forward, the patient safety field will need to study and learn the best methods for implementing appropriate accountability to drive safety improvements while continuing to build a safety culture and encourage transparency about errors.
Contributors All authors fully contributed to the concept, design, writing and editing of this piece.
Funding The Doctors Company Foundation provided an education grant in support of this event at the NPSF Congress. The Doctors Company Foundation had no involvement in the planning, content, or outcome of the programme or of this text.
Competing interests None.
Provenance and peer review Not commissioned; internally peer reviewed.