
The pursuit of better diagnostic performance: a human factors perspective
  1. Kerm Henriksen,
  2. Jeff Brady
  1. US Department of Health and Human Services, Agency for Healthcare Research and Quality, Rockville, Maryland, USA
  1. Correspondence to Dr Kerm Henriksen, US Department of Health and Human Services, Agency for Healthcare Research and Quality, 540 Gaither Road, Rockville, MD 20850, USA; Kerm.Henriksen{at}ahrq.hhs.gov

Abstract

Despite the relatively slow start in treating diagnostic error as an amenable research topic at the beginning of the patient safety movement, interest has steadily increased over the past few years in the form of solicitations for research, regularly scheduled conferences, an expanding literature and even a new professional society. Yet improving diagnostic performance increasingly is recognised as a multifaceted challenge. With the aid of a human factors perspective, this paper addresses a few of these challenges, including questions that focus on who owns the problem, treating cognitive and system shortcomings as separate issues, why knowledge in the head is not enough, and what we are learning from health information technology (IT) and the use of checklists. To encourage empirical testing of interventions that aim to improve diagnostic performance, a systems engineering approach making use of rapid-cycle prototyping and simulation is proposed. To gain a fuller understanding of the complexity of the sociotechnical space where diagnostic work is performed, a final note calls for the formation of substantive partnerships with those in disciplines beyond the clinical domain.

  • Diagnostic errors
  • Human factors
  • Information technology
  • Checklists
  • Simulation

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/


A few years ago, the question was raised as to why diagnostic error had not received much attention compared with other adverse events that were afforded greater patient safety focus.1 Much of the neglect was traced to the Institute of Medicine's (IOM) seminal To Err is Human report.2 The report's overarching take-home message was that preventable adverse events arose from a complex web of system factors, not from the failings of individual clinicians. The report had an immediate media impact. Long aware of the fragmented nature of their profession, many healthcare leaders embraced a systems-oriented approach to patient safety. With the spotlight on recognisable system failures—medication mix-ups, communication lapses and wrong-site surgeries—diagnostic error seemed left in the shadows. Yet diagnostic mishaps involve both system-related and individual components, as well as many other factors.

It did not take long, however, for concerned investigators to draw attention to diagnostic error, raise awareness of it, and undertake studies.3–6 Funding agencies and foundations have taken notice. Over the past 7 years, the Agency for Healthcare Research and Quality (AHRQ) in the USA has supported a number of diagnostic error conferences and has posted a special emphasis notice on its website, soliciting research on diagnostic performance in ambulatory care settings. To further increase awareness, research and education, a new professional society, the Society to Improve Diagnosis in Medicine, was launched by the emerging discipline's thought leaders. Diagnostic error is undeniably gaining respect and recognition as a worthy research domain. However, despite the enthusiasm, challenges remain. Two recent reviews of the literature—one on system-related interventions and the other on cognitive interventions—found a large gap between suggestions and ideas for interventions and those that had been operationalised and tested empirically.7,8 To have a lasting patient safety impact, there is a need to candidly confront and critically examine these challenges.

Who owns the problem?

Just as many physicians have viewed system-based failures as an institutional problem, so have healthcare CEOs and administrators viewed diagnostic error as an individual physician matter. Both views are short-sighted. They fail to take into account the reciprocal influences and interdependencies between imperfect humans and their imperfect work environments. With each ceding part of the diagnostic error equation to the other, meaningful communication and collaborative effort are stymied. Both own the problem. Physicians need to be just as concerned as purchasing officers and system administrators about health IT systems that lack interoperability, add complexity to the workflow, and introduce usability issues that threaten patient safety. Likewise, administrators and unit directors should be just as concerned as the clinicians themselves about the host of cognitive limitations and the working conditions that facilitate such limitations. In the absence of a meaningful dialogue and a sense of joint ownership, it should not be surprising if ‘we're doing fine here’ is the mindset.

When knowledge in the head is not enough

A distinction between knowledge in the head and knowledge in the world was made years ago by Donald Norman.9 As a cognitive psychologist, Norman certainly appreciated the information processing and storage capacities for which humans are known, but knowledge in the head is not always retrievable when needed. When it comes to considering the full range of possibilities available for making an optimal diagnosis, that full range frequently does not get considered. To use a term introduced by Simon10 that predates our current use of the term ‘premature closure’, we ‘satisfice’ instead, expending minimal cognitive effort and accepting the first possibility that seems satisfactory. Norman argued that our daily and professional lives could be made much easier and less error-laden by putting more knowledge out in the world rather than relying solely on knowledge in the head.
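To make the contrast concrete, the following is a minimal sketch, assuming invented candidate diagnoses, fit scores and a plausibility threshold (none of which come from the article), of how satisficing differs from weighing the full range of possibilities.

```python
# Hypothetical illustration of Simon's 'satisficing' versus weighing the full
# range of possibilities. The candidate diagnoses, fit scores and threshold
# are invented for this sketch; they do not come from the article.

CANDIDATES = [
    ("viral gastroenteritis", 0.72),  # (diagnosis, how well it seems to fit)
    ("appendicitis", 0.80),
    ("ectopic pregnancy", 0.91),
]

def satisfice(candidates, good_enough=0.70):
    """Accept the first possibility that seems satisfactory (premature closure)."""
    for diagnosis, fit in candidates:
        if fit >= good_enough:
            return diagnosis
    return None

def consider_full_range(candidates):
    """Evaluate every possibility before committing to one."""
    return max(candidates, key=lambda pair: pair[1])[0]

print(satisfice(CANDIDATES))            # 'viral gastroenteritis'
print(consider_full_range(CANDIDATES))  # 'ectopic pregnancy'
```

The point of the toy example is only that the stopping rule, not the knowledge itself, determines which diagnosis is reached.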

Many of the process errors associated with diagnostic investigations can be reduced by visible and accessible information display and tracking systems for updating the status of patients’ referrals, test results and follow-up actions. The lead author recently received a solicitation from a nearby community hospital requesting donations for an electronic whiteboard (census board) and tracking system for their emergency department. While it is an encouraging sign when local hospitals recognise that knowledge in the head is insufficient in today's information-intensive clinical environments, the transition from dry-erase boards to electronic boards has not always taken into account the distributed and social nature of clinical work. If disconnected from regular workflow patterns and the needs of providers, the usefulness of electronic boards likely will be limited.11,12
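As a purely illustrative sketch of what ‘knowledge in the world’ might look like in such a tracking system, the structure below records each referral or test with a visible status and surfaces items that have gone too long without follow-up; the fields, statuses and 14-day window are assumptions made for the example, not features of any particular product.

```python
# A minimal sketch of 'knowledge in the world': a visible tracking structure
# for referrals, test results and follow-up actions. Fields, statuses and the
# 14-day window are assumptions for illustration, not any real system's design.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrackedItem:
    patient_id: str
    description: str           # e.g. "CT abdomen", "cardiology referral"
    ordered_on: date
    status: str = "ordered"    # ordered -> resulted -> reviewed -> patient notified
    history: list = field(default_factory=list)

    def update(self, new_status: str, note: str = "") -> None:
        """Record each status change so the item's trail stays visible to the team."""
        self.history.append((date.today(), self.status, note))
        self.status = new_status

def overdue(items, today: date, max_days: int = 14):
    """Surface items still sitting in 'ordered' long enough to need follow-up."""
    return [item for item in items
            if item.status == "ordered" and (today - item.ordered_on).days > max_days]

# Example: an order placed on 1 May is flagged when the board is checked on 20 May.
board = [TrackedItem("p-001", "CT abdomen", date(2013, 5, 1))]
print([item.description for item in overdue(board, today=date(2013, 5, 20))])
```

The design choice the sketch illustrates is simply that status and history live in a shared, visible artefact rather than in any one clinician's memory.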

Diagnostic work, like other clinical work, is embedded in a greater sociotechnical system. Rather than limiting our view of improved diagnostic performance to more knowledge in the head, or solely to what takes place during a physician–patient encounter, a more encompassing view holds that diagnostic work is distributed across time and place—distributed cognition,13 shared mental models14 and joint cognitive systems15 are a few of the related terms used—and is continuously subject to the direct and indirect effects of multiple interactions among providers, specialists, technicians, patients, test results, artefacts, tools, technology, organisational structures and cultures, and local contextual factors, as well as shifting health policy and sentiment.16 In brief, diagnostic work frequently involves more than a ‘between the ears’ revelation. For patients, family members and clinicians, it can be a disjointed journey across confusing terrain, aided or impeded by different agents, with no destination in sight and few landmarks along the way.

Is segregation into camps a good thing?

In much of the emerging literature and conferences to date, cognitive issues and biases (including perceptual and affective biases) and system failures typically are treated as separate entities. This division results from a key question that investigators seem to face. Are efforts to improve diagnostic performance better spent on trying to correct the dispositions to cognitive and affective biases (eg, premature closure, overconfidence and visceral bias17) and other cognitive shortcomings, or can diagnostic performance more easily be improved by system solutions, such as decision support systems, that sidestep concerns about cognitive bias?18 Both camps have their advocates, and both approaches have less than sterling track records. Convincing demonstrations of effective cognitive debiasing techniques are few, and cumbersome decision-support systems that are poorly integrated with the clinical workflow have not gained adoption by busy physicians.

Framing the question this way encourages the needless choosing of camps. While camp life may provide a sense of easy agreement and unity for its members, there are drawbacks for those who become too comfortable in camps.19 The downside can be a distrust of outsiders with alternative views, a disregard for information not compatible with prevailing beliefs, and a lot of self-referential endorsement. Taken collectively, these are not the best qualities for understanding system complexity. Yet humans, with their cognitive limitations, also are capable of remarkable and adaptable real-life decision making,20,21 and systems, with their glitches, progressively get better in terms of functionality, interoperability and usefulness. Neither is going to disappear: both, with their strengths and limitations, inextricably interact and will continue to impact the diagnostic process.

The putative distinction between ‘cognitive’ and ‘system’ becomes somewhat spurious when one considers the diagnostic work of a busy emergency department with its chaotic mix of system-based, cognitive, affective, perceptual, temporal and variable patient factors. Of course, both cognitive and system variables can be manipulated and tested separately, but with both types of variables interacting in the clinical setting, the interaction term in our analyses should be of just as much interest as the main effects. In fact, in a multi-site survey of primary care physicians on diagnostic challenges, when respondents were asked specifically about the role of cognitive factors, they referenced system and patient factors at the same time.22

The perils of embracing dichotomies too eagerly are ever present—no less for the learned than for the uninitiated. Of course, dichotomies may serve as useful fictions or labels initially, helping to simplify complex phenomena. But they tend to assert too much, feeding delusions of understanding when their overuse actually impedes it. Instead of serving as convenient shorthand labels, they uncritically take on explanatory power, serving as causes rather than consequences. Parsing the world into imperfect humans and imperfect systems, into cognitive versus system-based research approaches, and into system 1 (intuitive) versus system 2 (analytical) modes of thinking,23,24 misses much of the human factors work on shared mental models and distributed cognition cited earlier.

What are we learning from health IT?

The leveraging and potential benefits of electronic health records in helping to improve diagnostic performance have been duly noted.25 A few of the possibilities include providing access to the patient's evolving medical history; providing a forward-moving space for documenting patient and clinician assessments, concerns and uncertainties; enabling continuous updating and rearranging of problem lists; providing prompts to aid in the asking of key questions that should not depend on memory; tracking test ordering, results and follow-up with patients; and providing feedback on outcomes, given that physicians and organisations lack systematic mechanisms to learn from diagnostic efforts and calibrate their performance.26 But potential benefits are not the same as actual benefits. A recent IOM report on health IT noted that its adoption and widespread use in the USA have been slow.27 At the same time, there is concern that, if poorly designed and implemented, health IT can create new hazards and threaten patient safety in a healthcare delivery environment that is already known for its complexity and fragmentation.

Of all the hazard categories identified in a government report that examined health IT hazards, software design and usability issues (eg, difficult information access, difficult data entry, confusing information displays, excessive demands on memory, confusing feedback to the user) were mentioned the most (52% and 49%, respectively).28 The hazards that are built unintentionally into our most sophisticated and promising technologies, as users interact with them in unkind and unforgiving work environments, deserve continued attention. Healthcare organisations, vendors and researchers need to work together, in the spirit of a learning community, on design, usability and implementation issues. Providers need to be involved at the earliest stages of design. As a start in this direction, the US Department of Veterans Affairs has established a usability laboratory to support the rapid prototyping of new health IT designs, formal usability testing and the development of analysis tools to assess existing technologies.29 The results of risk assessments used to identify the unanticipated and unintended consequences of health IT need to be fed back to vendors. Vendors, likewise, may need encouragement and assistance in conducting their own usability testing and risk assessments, and in understanding the broader sociotechnical safety consequences of their products. Beyond the accessibility and usability issues that have long been cornerstone concerns of the human factors community lies the greater challenge of using health IT in ways that better educate and empower patients to view themselves as active partners in their own medical histories, diagnostic work-ups and improved care.30

Is there a role for checklists?

While used in other hazardous industries for decades, checklists have found their way into healthcare following successful efforts in reducing bloodstream infections in the intensive care unit, in reducing surgical morbidity and mortality in diverse global settings, and in re-engineering the hospital discharge process to decrease avoidable rehospitalisations.31–35 More recently, papers have appeared calling for the further exploration of their use in diagnostic work.36,37 To decrease inappropriate reliance on memory and heuristics and to help curb overconfidence, diagnostic checklist suggestions range from general steps well known to residents but frequently neglected by busy practitioners, to more comprehensive differential lists, and to those with more critical possibilities that ought to be considered and discounted before a final diagnosis is made.
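Purely as an illustration of that last kind of checklist, the sketch below represents ‘must-not-miss’ possibilities as data that must be explicitly considered and discounted before closure; the presentations and diagnoses listed are placeholders invented for the example, not clinical guidance.

```python
# Hypothetical sketch: critical possibilities to rule out before closing a
# diagnosis. The entries are placeholders, not clinical recommendations.

CRITICAL_POSSIBILITIES = {
    "chest pain": ["acute coronary syndrome", "pulmonary embolism", "aortic dissection"],
    "headache": ["subarachnoid haemorrhage", "meningitis"],
}

def remaining_items(presentation: str, discounted: set) -> list:
    """Critical possibilities not yet explicitly considered and discounted."""
    return [dx for dx in CRITICAL_POSSIBILITIES.get(presentation, [])
            if dx not in discounted]

# Closure would wait until nothing remains on the list.
print(remaining_items("chest pain", {"aortic dissection"}))
```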

Checklist development, use and acceptance come with challenges. Development requires a team of individuals or a consensus body that is adept in best practice guidelines and the underlying evidence base, in the realities of clinical work, in measurement and in human factors design principles, and that has the perseverance to engage in successive pilot-testing trials and improvements. If put together too rapidly, checklists can be excessively lengthy, ambiguous, devoid of clinical reality and insensitive to the needs of front-line users. Even when checklists are well developed and accepted by end-users, there is potential for the cognitive drift that repetition, by itself, seems to induce. Tasks that are repetitive and become routine are performed with nominal cognitive resources. If clinicians ‘tune out’ and use checklists in a perfunctory manner, subtle and unexpected cues to the patient's condition may be missed. Checklists are largely based on past failures, and reluctant adherence is no substitute for heightened sensitivity to other ways a process can fail (of course, ‘close the barn door’ appears on the checklist once the horse has bolted, but has the owner checked the loose side-planking in the horse's stall?). Finally, investigators who have successfully implemented checklists are quick to tell us that it is not all about the checklist. A prevailing patient safety culture, teamwork, leadership commitment, well-conceived measurement, and attention to implementation, workflow and organisational change issues all need to be carefully aligned before checklists can be properly tested.

So far, checklists have been most successful with discrete, observable tasks—those associated with surgical, central venous catheter and discharge procedures. At the same time, a certain amount of the diagnostic process involves individual mental activity—perceiving, thinking and interpreting—that is less observable.36 Do these mental activities have discernible start- and stop-points for which a checklist could be used? Other diagnostic pursuits have been characterised as ‘wicked problems’16,38 where there is no clear end goal or path, where a trusted progression of tests does not exist, where decisions taken lead to new uncertainties, and where tentative solutions with their known and unknown effects are difficult to evaluate and compare. A better understanding of the effective uses and limitations of checklists in diagnostic work is clearly needed.

An engineering tactic to improve the evidence base

Hospitals and primary care offices typically are fluid and dynamic places where interruptions, slips in the schedule and encounters with the unexpected are commonplace. While dynamic environments might be ideal for research aims that are facilitated by observational or ethnographic approaches, they are less ideal for testing the effects of an intervention and safely attributing the results to the independent variable of interest without the results also being influenced by contextual and organisational variables over which investigators have little control. Yet there is a need for prospective empirical testing of approaches that aim to improve diagnostic performance. Simulation and systems engineering approaches are gaining use as a test-bed for health IT and medical devices.39–41 One tactic is to engage in rapid-cycle prototyping in a simulated setting to test the various promising features of an intervention's design. The results are assessed, the prototype is improved and tested again, and the test–assess–improve cycle continues until there is satisfaction with the prototype's efficacy (or the prototype is discarded, if there is not). Fail early, not later (at a later stage of development when extensive resources have been encumbered), is the engineering mantra. Many diagnostic challenges do not require the actual clinical environment or even a high-fidelity simulation environment, but simply a requisite level of functional fidelity; that is, they require the diagnostician or team to process the same cues, the same variable patient conditions, the same incomplete information and uncertainty while subject to the same constraints and time pressures, to make the same decisions, to carry out the same actions, and to be informed of the same consequences as would occur in the clinical setting.42,43
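A minimal sketch of that loop is shown below, assuming a simulated test bed that returns an efficacy score between 0 and 1; the function names (run_simulation, revise), the target threshold and the cycle cap are illustrative assumptions, not part of any published protocol.

```python
# A minimal sketch of the test-assess-improve cycle, assuming a simulated test
# bed. run_simulation and revise are caller-supplied stand-ins; the target
# efficacy and cycle cap are illustrative, not drawn from the article.

def rapid_cycle_prototyping(prototype, run_simulation, revise,
                            target_efficacy=0.85, max_cycles=10):
    """Iterate test -> assess -> improve until the prototype seems good enough,
    or discard it ('fail early') if it never reaches the target."""
    for cycle in range(1, max_cycles + 1):
        efficacy = run_simulation(prototype)     # test in the simulated setting
        print(f"cycle {cycle}: efficacy = {efficacy:.2f}")
        if efficacy >= target_efficacy:          # assess: ready for clinical testing
            return prototype
        prototype = revise(prototype, efficacy)  # improve the design and repeat
    return None                                  # fail early, before resources are encumbered
```

Capping the number of cycles is what operationalises the ‘fail early’ mantra: a prototype that never approaches the target is abandoned before clinical resources are committed.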

By engaging in an iterative test–assess–improve process in a simulated setting, researchers are more likely to resist the temptation to seek the immediate scientific gratification of comparing a premature intervention with a control group in a resource-intensive clinical setting and coming up with equivocal results.44 However, once there is satisfaction with the intervention's efficacy, there continues to be a need to test its effectiveness in the environment of use—the ‘flesh and blood’ clinical setting with its noise, time pressures and interruptions. Similar refinements are likely to be needed as work-system and contextual factors requiring alignment are identified.

A final note

One of the many constructive comments made by the paper's reviewers makes for a fitting parting message. It was observed that most of the funded research and published work on improving diagnoses has come from clinicians. The question raised was why there has been a failure to engage scientists who are expert in human performance, perception, cognition and decision making. While there are exceptions, they are still exceptions. In other patient safety domains, clinicians and human factors professionals have joined together, forming integrated teams to advance the field. The emerging discipline of diagnostic improvement needs a human factors voice—not just one voice, but a number of voices, and not just from those who reside in camps. Orthodoxy in safety research does not serve anyone well.45 To gain a fuller understanding of the interactivity and complexity of the sociotechnical space where diagnostic work is performed, an opportunity exists for clinicians and their human factors counterparts, as well as other disciplines, to form substantive partnerships for the long-term work ahead. Until we learn to do this, progress is likely to be less than desired.

Footnotes

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.