Large numbers of patients are injured annually by care that is intended to help them, and while many strides have been made in improving our understanding of patient safety considerations, it is not clear that this is translating into improvements in clinical outcomes. We now know that a small number of causes of harm appear to account for most of the injuries in hospitals, specifically hospital-acquired infections, adverse drug events, surgical injuries, deep venous thromboses and pulmonary emboli, falls and pressure ulcers.1 Some types of harm, though, such as harm caused by failure to make a correct diagnosis or harm caused by failure to intervene quickly enough in a decompensating patient, are more elusive and not so readily counted, and thus, the true toll from iatrogenic harm is certainly substantially higher than current estimates suggest. Moreover, many of the safety issues that have the greatest cognitive component may be relatively hard to count. In addition, cognitive safety issues may be even more important outside the hospital than in it because care is so much more fragmented and patients interact with the system much less often, which may increase the impact of errors.
Another factor is the relationship between errors and harm. Although few studies have attempted to quantify this directly, it is clear that a relatively low proportion of errors result in harm: just one in a hundred in one study.2 Errors are frequent, but they vary in their likelihood of causing harm; harm is considered preventable if it is associated with an error that likely caused it, while other cases of harm are not preventable given what is known. In the accompanying paper, Patel et al3 make a strong case that attempting to achieve a zero error rate is not feasible in most areas and should therefore not be the target. We agree that achieving zero rates of error is neither feasible nor desirable in most domains in medicine, certainly where humans and cognition are involved, though for some specific tasks (for example, ensuring with bar-coding that the correct medicine goes to the correct patient, or ensuring that a tube of blood is labelled correctly) it may be possible with automation to achieve very low error rates, on the order of 10⁻⁶ or 10⁻⁷. In contrast, goals of ‘zero harm’ are more plausible targets and have already proved very valuable in several instances, even when they initially appeared impossible. One clear example is Pronovost et al's4 work on catheter-related bloodstream infections, in which they showed that by rigorously following a checklist of a number of steps, hospitals have been able to nearly eliminate these infections for prolonged periods. We do have some problems with the presentation of the framework in Patel et al's3 figure 1, which suggests a simple progression from violation to near miss to adverse event; this is not necessarily the case. In addition, the distinction between errors and near misses is not clear from the diagram; the usual interpretation has been that it relates to the error's potential for harm. We also think that other interventions, such as clinical decision support, may be helpful both in the normal routine and in what is labelled the ‘near miss’ zone. The framework is, however, helpful in that, where this model holds true, it highlights the need for interventions to prevent harm.
As Patel et al's3 paper makes clear, cognition is inherently error-prone, and errors will in some cases clearly result in harm. Notably, some types of error are much more likely to occur than others that are more egregious: for example, an eightfold overdose might plausibly be administered, whereas an 800-fold overdose, though it would be lethal, would be much less likely to be carried out. When designed appropriately, information systems can dramatically reduce cognitive burden, for example by making it easier to do the right thing, pointing out potential problems and performing calculations. But today's information technologies often fall far short, burying key pieces of information in complex screens and failing to make it easy to sort out which tasks are most urgent and which pieces of information are particularly important. They may, therefore, in some instances inadvertently increase the cognitive burden on already overstretched physicians.
Prevention approaches should thus also be considered through the lens of cognition. Checklists are effective because they simplify, identifying a few key things that must be done in a specific circumstance. Tools to detect patients who are decompensating will likely work best if they make it easy for providers to determine which patient has an issue and what the options are for managing it. More broadly, information technologies designed to improve safety must pay attention to warning design and human factors.5 Improved system design is particularly helpful in reducing cognitive errors because health information technology (HIT) systems and human cognitive processes have opposite strengths and limitations, and so complement each other.
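To make this design principle concrete, the short Python sketch below shows one way a decompensation-detection tool might present a ranked worklist rather than raw data, so that a provider can see at a glance which patient most needs attention. It is purely illustrative: the field names, thresholds and the simple counting score are our own assumptions for the example, not a validated early warning score or any particular vendor's product.

```python
# Illustrative sketch only: a toy "which patient needs attention first?" worklist.
# The thresholds and the counting score are hypothetical, not a validated early warning score.
from dataclasses import dataclass


@dataclass
class Vitals:
    patient_id: str
    heart_rate: int    # beats/min
    resp_rate: int     # breaths/min
    systolic_bp: int   # mm Hg
    spo2: int          # %


def concern_score(v: Vitals) -> int:
    """Count how many vital signs cross (hypothetical) concern thresholds."""
    score = 0
    score += v.heart_rate > 110   # tachycardia
    score += v.resp_rate > 24     # tachypnoea
    score += v.systolic_bp < 90   # hypotension
    score += v.spo2 < 92          # hypoxaemia
    return score


def worklist(patients: list[Vitals]) -> list[tuple[str, int]]:
    """Return concerning patients ranked with the most concerning first."""
    ranked = sorted(patients, key=concern_score, reverse=True)
    return [(p.patient_id, concern_score(p)) for p in ranked if concern_score(p) > 0]


if __name__ == "__main__":
    ward = [
        Vitals("A", heart_rate=72, resp_rate=14, systolic_bp=128, spo2=98),
        Vitals("B", heart_rate=118, resp_rate=28, systolic_bp=86, spo2=90),
    ]
    for pid, score in worklist(ward):
        print(f"Patient {pid}: {score} concerning vital sign(s)")
```

The specific thresholds matter less than the design choice: the system does the sorting and flagging, so the clinician's limited attention goes first to the patient most likely to be deteriorating.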
It is also helpful to consider error theory in thinking about the role of cognition, especially because of the risk of cognitive overload and its associated impact on modes of thought.6 The prevention approaches differ for slips and mistakes. Slips are errors involving low-level, semiautomatic behaviour, for which forcing functions may be especially helpful. Mistakes are errors involving cognition; the problems tend to be much more open-ended, and forcing functions are much less often appropriate. Much of the prevention work in patient safety to date has focused far more on slips than on mistakes, because slips are more constrained and thus easier to address. From the theoretical perspective, Amalberti's work, noted by Patel et al,3 has perhaps been especially influential. In particular, he has described how workers tend to move inexorably to less safe spaces when under pressure, as is so common in healthcare. He has also made it clear that, while healthcare has many lessons to learn from aviation, we are unlikely any time soon to achieve safety performance at anything like the 10⁻⁶ or 10⁻⁷ levels seen in commercial aviation in most areas of healthcare, especially those with a high cognitive burden, such as urgent cardiac surgery in a critically ill patient, where safety levels are much more likely to be on the order of 10⁻¹.7
To reduce levels of harm, our systems need substantial redesign. Many of the approaches for doing this will depend directly on reducing cognitive burden in specific situations, such as protocols and checklists for managing a cardiac arrest or preparing for surgery. Others will be less direct and will involve, for example, making tools available that provide ready access to key information or highlight important potential problems. The success of decision support often depends not so much on the underlying knowledge as on how the implementation is executed: the most successful approaches to date for renal dosing of medications8 appear to have succeeded because they do much of the work in the background and make it easy for the provider to do the right thing, with minimal interruption of workflow.
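The sketch below illustrates, in Python, what "doing the work in the background" might look like for renal dose adjustment. The Cockcroft-Gault creatinine clearance estimate is a standard formula, but the dose bands and the unnamed drug are hypothetical and chosen only for illustration; this is not the implementation described in the cited work.

```python
# Illustrative sketch of "background" renal dose-adjustment support.
# The Cockcroft-Gault estimate is standard; the dose bands below are hypothetical.

def creatinine_clearance_ml_min(age_years: float, weight_kg: float,
                                serum_creatinine_mg_dl: float, female: bool) -> float:
    """Cockcroft-Gault estimate of creatinine clearance (mL/min)."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl


def suggested_dose(crcl_ml_min: float) -> str:
    """Hypothetical dose bands for an illustrative renally cleared drug."""
    if crcl_ml_min >= 60:
        return "usual dose"
    if crcl_ml_min >= 30:
        return "half dose"
    return "quarter dose; consider pharmacist review"


# The order screen could simply default to the suggested dose, so the prescriber
# accepts it with one click rather than doing the arithmetic at the point of ordering.
crcl = creatinine_clearance_ml_min(age_years=78, weight_kg=60,
                                   serum_creatinine_mg_dl=1.8, female=True)
print(f"Estimated CrCl {crcl:.0f} mL/min -> {suggested_dose(crcl)}")
```

The point is the workflow: the calculation and lookup happen silently, and the provider is offered a sensible default rather than an interruptive alert.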
Some specific types of error are especially fertile ground with respect to the role of cognition, and they represent areas in which research is urgently needed. Examples include diagnostic errors, errors associated with failure to gather enough information (itself an important cause of diagnostic error) and errors in the management of patients with multimorbidity. Electronic health records should be able to help with this, but too often today they do not.9 In the Diagnostic Error Evaluation and Research framework proposed by El-Kareh et al, 24 of the 30 frequent lesions (by rough count) appear to depend in a major way on cognition, if the cognitive role of the patient, which is clearly an important one, is included. Furthermore, all 10 of the tools these authors propose could meaningfully reduce the likelihood of diagnostic error.
For some types of harm, such as falls and pressure ulcers, cognition appears to play a less important role than for many others. For adverse drug events, by contrast, the process is complex, recognising the events can be difficult and underdiagnosis is the rule. For all types of harm, a key is recognising the issue, and that involves strong team performance. Patel et al point out that groups are much better at identifying and mitigating errors than individuals, and we need to build on this to develop HIT that leverages the complementary skills of care teams in ways that enable safety improvement.3
Overall, the role of cognition in improving safety has not received enough attention. Information technology is now routinely used in healthcare, but it is not yet necessarily designed in ways that will result in major improvements in safety. Perhaps the lowest-hanging fruit from the HIT perspective is the computerisation of prescribing, but this is only one of the many ways that HIT can make healthcare safer. If we want to truly realise the safety benefits of HIT, it will be vital to leverage the contributions of cognitive science, human factors and engineering in refining the way HIT is delivered. Doing this will require additional support for research in this area, which should focus not just on traditional causes of harm, such as hospital-acquired infections, but also on less well defined and inherently more complex safety issues such as diagnostic errors, the management of patients with multimorbidity and the evaluation of patients who may be decompensating.
Acknowledgments
We thank Betty Liu for her assistance with preparation of the manuscript.
Footnotes
Twitter Follow David Bates at @dbatessafety
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.