
Technology, cognition and error
Enrico Coiera
Correspondence to Professor Enrico Coiera, Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, NSW 2109, Australia; enrico.coiera@mq.edu.au


Introduction

Our information machines exist to make us faster, more powerful decision-makers. Computers prompt our limited human memory with reminders of what we should be doing. They retrieve information we could never remember or indeed even know. They suggest solutions to complex problems for us and take over the many routine tasks that we delegate to them. Information technology (IT) is thus a cognitive prosthesis that enhances our abilities beyond the unaided human norm.1

Unless a decision process is entirely automated, it is the product of the technology, the human user and how well each fits the other. Weed famously saw this act of using IT as one of ‘knowledge coupling’ between human and machine.2 It is the quality of this interaction that counts in the end, and not the quality of the elements in isolation.3

The first test of our interaction with IT should be whether it leads to better, and quicker, decisions. Well-designed interactions with IT should also ensure that our decisions are as safe as possible. Poorly designed interactions unfortunately can distort decision-making and create new types of hazards and errors, ending in patient harm.4 Indeed, there is a steadily growing evidence base that confirms that this harm is real, widely prevalent and that its consequences for patients can be significant, sometimes fatal.5 ,6

The evidence base also clearly shows that human factors are a major contributor to IT-associated errors and harms.7 There is thus an imperative to design clinical information systems that are both demonstrably safe in construction and in use. For this to happen, we must move from empirical observation of IT-related hazards, errors and harms to a theory-based understanding of the causes of these risks and their mitigation.

In a thoughtful review of what we know about the genesis of error and patient harm,8 Patel and colleagues make abundantly clear that we must understand deeply the interplay between human cognition and error. That exploration should also encompass machine reasoning and human–computer interaction.

In the remainder of this paper, the way that the interplay between cognition and IT can lead to error and patient harm is first reviewed. The second part of the paper considers how such an understanding can shape our design of safer interactions with IT, and indeed how we can harness this technology class to minimise IT-related risks. Both themes are areas of research and practice that surely must become a major new focus for patient safety if we are to neutralise this potent and increasingly pervasive source of patient harm.9

The role of IT in the genesis of error

While our capacity to design safe interactions with clinical IT is still rudimentary, we do know enough about error, cognition and technology to identify a number of research priorities. For example, disruptions to memory, cognitive overload and cognitive biases can all in different ways impair our interaction with this technology. Other major sources of disruption include IT systems that are not designed to reflect the cognitive processes underpinning clinical work, and the workarounds that arise as humans try to circumvent the limitations of IT.

Multitasking, interruption and cognitive load

Adverse events can occur when the available cognitive resources such as memory are insufficient for the task at hand. This may occur because our attention is divided among a number of tasks. If a clinician is distracted or interrupted with a new task, or is multitasking, then memory processes can be disrupted by this excessive cognitive load, leading to errors in task execution.10 For example, after being interrupted while creating a medication order in an electronic prescribing system, a clinician may return to the primary task but select the wrong medication or dose, or indeed create the order in a different patient's record.

Regrettably, current-generation clinical IT is designed with the implicit assumption that its users are carrying out a single task and that their attention is devoted entirely to the interaction with the technology. What is really needed are clinical systems designed to be tolerant of multitasking and interruption, and able to support recovery from these events.

Environmental memory cues, for example, can enhance an individual's capacity to recover from interruption.11 When calculating a drug dose on paper, the paper acts as a cue that helps a clinician re-engage with the task after an interruption, both by marking their position in the task sequence and by recording intermediate calculations and initial data. Clinical information systems can be designed in a similar way. User interfaces should make clear what the current tasks are and how far the user has progressed in each, and should display any intermediate calculations, decisions or data. Systems that provide such cognitive cues should be better suited to busy and interruptive clinical environments.12
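One way to realise such cues in software is to persist each task's position and intermediate values so that, when the user returns after an interruption, the interface can redisplay where they were and what had already been entered. The sketch below is purely illustrative; the class and field names are hypothetical and not drawn from any particular clinical system.

```python
# A hypothetical sketch (not from any real system): persist a task's position
# and intermediate values so the interface can redisplay them after an interruption.
from dataclasses import dataclass, field

@dataclass
class TaskContext:
    task_name: str                                      # e.g. "medication order, bed 12"
    steps: list                                         # the full task sequence shown to the user
    current_step: int = 0                               # how far the user has progressed
    intermediate: dict = field(default_factory=dict)    # partial calculations and data entered so far

    def resume_summary(self) -> str:
        """Text the interface can display when the user returns to the task."""
        return (f"Resuming '{self.task_name}' at step {self.current_step + 1} of "
                f"{len(self.steps)} ({self.steps[self.current_step]}); "
                f"recorded so far: {self.intermediate}")

# Example: a dose calculation interrupted midway through.
order = TaskContext(
    task_name="gentamicin order, bed 12",
    steps=["select drug", "calculate weight-based dose", "confirm and sign"],
    current_step=1,
    intermediate={"weight_kg": 72, "dose_mg_per_kg": 5},
)
print(order.resume_summary())
```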

Automation biases

IT use can be compromised by the many cognitive biases we know affect all human decision-making. Biases such as the anchoring, adjustment and representativeness heuristics, and information presentation order effects, can all lead to decisions that do not reflect the available evidence.13 Thus both clinicians and consumers can misinterpret data presented to them by information retrieval systems because they interpret new information through the lens of prior belief. Related factors that shape how information is viewed include the order in which documents are accessed (the order effect) and the amount of time spent viewing documents (the exposure effect).14 One consequence of these biases is that clinicians and consumers can be swayed by information presentation effects into switching from a correct to an incorrect decision.15

Automation bias or automation-induced complacency is a very specific bias associated with computerised decision support and monitoring technologies.16 For example, when using a decision support system, a user can make either errors of omission (they miss events because the system did not prompt them to take notice) or errors of commission (they did what the decision system told them to do, even when it contradicted their training and the available data).

There are many possible explanations for automation bias. It has been suggested that when humans delegate tasks to a computer system they may also shed task responsibility. Computer users may then take themselves out of the decision loop and develop an ‘out of loop unfamiliarity’ with the system they are meant to be monitoring.17 If an urgent event occurs, recovering from loop unfamiliarity requires additional time and cognitive resources to regain situational awareness, that is, an understanding of all the variables required to make a decision. In contrast, without a decision aid, a human has no choice but to maintain an active mental model of the state of any system being monitored.

Recent evidence suggests that explicit training in automation bias has only a short-term benefit. Making individuals personally accountable for the consequences of their decisions, however, does seem to reduce automation bias. For example, if individuals are told that their actions are socially accountable because data on their performance are being recorded and will be shared with others, they spend more time verifying the correctness of a decision support system's suggestions and make fewer errors.18 Reducing automation bias may thus have its solution both in specific training programmes for the use of IT and in changes to user interface design that make it easier to stay ‘in the loop’.

Information system design may not reflect real-world use

If IT designers have a poor understanding of clinical work, they can make incorrect assumptions about the mental models, cognitive load and concurrent tasks of users. Design that does not reflect users and their work has a number of substantive consequences:

  • Inadequate or poorly designed user interfaces can add unwanted complexity and unnecessary additional work, and create new opportunities for error. Poor usability can thus make a system hard to learn, difficult to recall after a period of not using the system or simply inefficient.19 If there are too many options in a drop-down drug list, for example, or they are counterintuitively arranged, patients may be prescribed the wrong drug or dose through a ‘pick list error’. While such an action is classified as a ‘use error’, triggered by the commands provided by a user, it is often poor system design that creates the hazardous circumstances that predispose to the error. Risks are also increased when systems do not facilitate recovery from use errors, for example, when an order entry system does not allow clinicians to modify or cancel an order once it is placed.

  • Incomplete or incorrect assumptions about clinical tasks and mental models also create hazards. Patients might continue to receive medications because an order entry system incorrectly assumes that orders will not need to be changed once made, and thus does not support discontinuation or modification of orders. Errors are also generated when there is a mismatch between the system and the mental model of users. For example, an Electronic Medical Record might display weight in pounds when clinicians work in kilograms.

  • Mismatches between system workflow and clinical workflow can also lead to use errors. For example, reviewing a medication list at the time of administration assists in error detection. This workflow is disrupted when medication information must be accessed at a central workstation and not at the patient's bedside. Errors also occur when system design does not match the expected sequence in which clinical tasks are carried out. For example, medication decision support is likely to be more effective if it occurs when a clinician is still formulating a treatment plan, and less so at the time of writing the prescription (which is the norm in current systems).20

Catching design errors requires effort during system development, through rich interaction with users and their workplace. It also requires post-implementation surveillance to detect the almost inevitable existence of unanticipated consequences of IT21—much like postmarket surveillance for new drugs.

Post-implementation adaptation of IT creates patient risks

The socio-technical nature of IT means that the technology and the context within which it is used cannot be separated.22 Implementation science tells us that the work needed to fit IT to a given work context varies from organisation to organisation. The variations we find in the effectiveness and safety of IT across different settings are partly due to the necessary implementation differences between these contexts.23

Safety issues are also known to arise from the post-implementation responses of an organisation to new information systems, which include workarounds. These responses may either be user-initiated ‘repairs’ to a workflow that does not fit current needs, or be triggered when workflows change around an unchanging installed technology.24

Some workarounds exploit existing software functionality to execute tasks in ways unanticipated by the system designers. A time-poor clinician might thus use the cut and paste features of an electronic record as a workaround to copy text from a clinical note and use it to create a new duplicate entry. While it saves time, cut and paste also creates quality and safety issues, for example, incorrectly recording that patient observations were taken, when they were not.25

Other workarounds completely bypass IT, creating parallel workflows that circumvent the workflow as designed. One well-documented example comes from a medication administration system that used wristband barcodes to identify patients. To save the time spent walking back and forth between the medication cart and each patient, nurses affixed copies of patient barcodes to desks, scanner carts, doorjambs, supply closets, clipboards, their belt loops or even their arms, allowing them to scan multiple patients at once and to collect medications for several patients from the cart when they should have been servicing only one patient at a time.26 The obvious risk with this workaround is that the wrong medication will be given to a patient, exactly the opposite of the intent of the system design.

Designing safer information systems

Improvements in the design of information systems, and specifically the design of human–computer interactions, are clearly a necessary response to the growing evidence for IT-related harm. Many examples of such changes have been provided in the previous section. There are also two more systemic opportunities to manage IT risks, and both are currently not well understood or routinely exploited. The first is to harness IT to undertake surveillance of the processes and decisions in an organisation, given that the role of IT in many errors only becomes clear when they are considered as a group, and not individually. The second is not to see IT implementation as a technical process of installing a technology in an organisation, but rather as one of fitting IT to users and their workflows—implementation is redesign.

IT's role in hazard and error detection

Patel and colleagues8 emphasise that error detection and recovery are integral to the creation of a safe clinical workplace. Given IT's central role in the collation and analysis of clinical data, information systems have a significant role to play in identifying hazards and errors, as well as guiding the system towards safer areas of behaviour.

Incident reports are a cornerstone of patient safety as they provide frontline accounts of hazards and actual harms.27 Unfortunately, such reports are often not acted upon in a timely fashion, in part because there are so many of them to respond to. Reports may also be incorrectly labelled, and reports of similar events may be described or classified in different ways, minimising the signal in the data. Simple computer text mining methods appear to be a powerful way of improving this situation. Incident reports can be classified automatically by severity and type with high precision.28 ,29 The widespread use of such technology should allow near real-time alerting of ‘outbreaks’ of severe or clustered events, and assist health services in prioritising their responses.
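As a rough illustration of how little machinery such text mining can require, the sketch below trains a classifier to assign a severity label to free-text incident reports. The reports, labels and model choice (TF-IDF features with logistic regression) are assumptions made for the example; they are not the methods or data of the cited studies.

```python
# An illustrative sketch only: the reports, labels and model choice are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "wrong dose administered, patient required naloxone",
    "order entry system unavailable for two hours, medication orders delayed",
    "near miss: allergy alert caught penicillin order before administration",
    "patient fall in bathroom, no injury found on examination",
]
severity = ["severe", "moderate", "low", "low"]   # hypothetical severity labels

# Train a simple text classifier on the labelled reports.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(reports, severity)

# Assign a severity class to a newly submitted report.
new_report = "tenfold dosing error reached the patient overnight"
print(classifier.predict([new_report])[0])
```

In practice such a classifier would be trained on thousands of labelled reports, but the pipeline itself remains this simple.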

It is also possible to develop statistical profiles of the usual behaviour of clinical services, reflecting, for example, the typical volumes of events such as prescriptions or test orders, as well as the expected frequency of different event types. Automated monitoring of these process trails within clinical information systems can trigger alerts when deviations occur beyond the normally expected variation. Such deviations may signal new systemic hazards, such as failure or error in a particular clinical IT system, as well as help us study how such hazards evolve over time.30
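A minimal sketch of this kind of monitoring is shown below: a ‘usual’ profile of daily medication-order volumes is summarised by its mean and standard deviation, and a new day is flagged when it falls outside three-sigma control limits. The figures, the Poisson baseline and the choice of threshold are illustrative assumptions only.

```python
# Illustrative numbers only: profile 'usual' daily order volumes and alert when a
# new day falls outside three-sigma control limits.
import numpy as np

rng = np.random.default_rng(42)
baseline = rng.poisson(lam=120, size=90)          # 90 days of typical daily order counts
mean, sd = baseline.mean(), baseline.std(ddof=1)
lower, upper = mean - 3 * sd, mean + 3 * sd       # limits of normally expected variation

def check_day(volume):
    """Flag a day whose order volume deviates beyond the expected range."""
    if volume < lower or volume > upper:
        print(f"ALERT: {volume} orders, outside expected range "
              f"[{lower:.0f}, {upper:.0f}]; possible system failure or error outbreak")
    else:
        print(f"{volume} orders: within the usual profile")

check_day(118)   # a routine day
check_day(35)    # e.g. an ordering interface silently failing
```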

These methods also have application in improving the safety of care more broadly. Just as there can be organisational profiles of routine behaviour, we can extend such ‘safety envelopes’ to individual patients. There is, for example, a well-known increase in the risk of death associated with weekend admission.31 Data mining methods allow us to identify those patients most at risk of increased death and help separate patients who are likely to be at greater risk because of disease acuity from those who are at risk of death because of reduced service availability.32 Such methods can be used to build predictive models that calculate the incremental risk of iatrogenic harm through continuing ‘exposure’ to care.33 We can also compare the electronic record of an individual patient with a ‘virtual cohort’ of similar patients, to help predict which clinical interventions are most likely to benefit a specific patient and to avoid those likely to cause harm.34
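The virtual cohort idea can be sketched in a few lines: find the historical patients whose records most resemble the index patient and compare outcomes under each candidate intervention within that neighbourhood. Everything in the example (the features, the synthetic outcomes and the nearest-neighbour similarity measure) is an assumption made for illustration, not the method of the cited studies.

```python
# Illustrative assumptions only: a nearest-neighbour 'virtual cohort' around an index patient.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Toy historical records: [age, creatinine, systolic BP], the intervention each
# patient received (0 or 1), and whether an adverse outcome followed.
features = rng.normal(loc=[70, 1.2, 130], scale=[10, 0.4, 15], size=(500, 3))
intervention = rng.integers(0, 2, size=500)
adverse = (rng.random(500) < 0.10 + 0.10 * intervention).astype(int)

index_patient = np.array([[78, 1.8, 145]])

# Standardise features so each contributes comparably to the distance measure,
# then find the 50 most similar historical patients (the 'virtual cohort').
mu, sigma = features.mean(axis=0), features.std(axis=0)
nn = NearestNeighbors(n_neighbors=50).fit((features - mu) / sigma)
_, idx = nn.kneighbors((index_patient - mu) / sigma)
cohort = idx[0]

# Compare adverse-outcome rates within the cohort, by intervention received.
for arm in (0, 1):
    members = cohort[intervention[cohort] == arm]
    if members.size:
        print(f"intervention {arm}: adverse-outcome rate "
              f"{adverse[members].mean():.2f} (n={members.size})")
```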

At the level of the individual decision-maker, feedback comparing their practice with that of a standardised cohort of clinicians can help signal when an individual's practice varies significantly from norms—perhaps justifiably so. Clinical feedback appears to be a powerful intervention to improve the quality and safety of clinical care,35 and may have a very powerful cognitive basis. Historically, cognitive scientists have identified a wide variety of cognitive ‘biases’, and as Patel and colleagues point out, the nature of these individual biases is sometimes unclear.

Recent research in psychology suggests that most decision biases are caused by a misalignment between event samples known to the individual and the true sample of events. In the decision by sampling model, our personal judgements are shaped by our personal sample of the available evidence, drawn from memory or the environment.36 As our personal sample of events is typically small and unrepresentative of the total distribution, our decisions are equally skewed. Such distortions are common in human assessments of health risks, where individuals play down risks associated with behaviours such as smoking, drinking or exposure to HIV. Information systems that provide clinicians with feedback on their behaviour probably work by recalibrating each individual's estimate of how normal their decisions are for typical patients. Similarly, providing individuals with tools to engage in sense-making of data can assist in de-biasing their decisions and better reflecting the underlying information presented to them.15
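A tiny simulation makes the mechanism concrete. If the true rate of an adverse event is low and each individual judges the risk only from the handful of cases they have personally seen, most will have seen no events at all (and so may discount the risk), while those who have seen one will tend to overestimate it. The numbers below are invented for illustration.

```python
# Invented numbers for illustration: when judgements rest on a small personal
# sample of events, most estimates of a rare risk are badly skewed.
import numpy as np

rng = np.random.default_rng(1)
true_rate = 0.02                  # true probability of the adverse event
personal_sample_size = 15         # cases each individual happens to have seen

events_seen = rng.binomial(personal_sample_size, true_rate, size=10_000)
estimates = events_seen / personal_sample_size

print(f"true rate: {true_rate:.3f}")
print(f"individuals who saw no events and so may discount the risk: "
      f"{(estimates == 0).mean():.0%}")
print(f"mean estimate among those who saw at least one event: "
      f"{estimates[estimates > 0].mean():.3f}")
```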

Harm minimisation through complexity reduction in system design and implementation

System implementation is an often-difficult step in the lifecycle of technology. Many have observed that even when a system works well at first evaluation, it may perform very differently in the sites where it is later implemented. Implementation may require far greater expense and effort because the technology does not fit in as easily as expected with existing processes and systems.37 Implementation should thus be considered an adaptive process that may require both construction—building the necessary components to allow new and pre-existing work elements to interoperate—and customisation—the localisation or tuning of components and processes to the special needs of an organisation or process. Failure to understand the adaptive nature of implementation is no doubt one of the main reasons health IT systems flounder post-installation.

System complexity is a natural source of hazard as it increases the likelihood of unanticipated interactions between the components of any IT system. A common consequence of poor adaptation during IT implementation is increased complexity in the workflow.13 Indeed, Patel and colleagues8 make the strong case that such workplace complexity leads to impaired cognition and error.

Safe systems typically minimise complexity by emphasising modularity in design, which by its nature constrains any interactions between components to within their own local module. We can see the benefits of design modularity in the way tightly coupled interventional bundles improve patient safety.38 IT provides us a means to modularise clinical work, for example, around agreed order sets, care plans and clinical protocols.

Safe IT design also emphasises the creation of defences such as redundant components within a system.39 Failure in one system component then need not lead to harm if other elements are designed to step in, either to provide a cross check (eg, through alert generation) or as a substitute (such as a reminder for a missed action).

The implementation work required when new information systems are installed also provides an opportunity for redesign and optimisation of existing clinical processes. If well executed, such redesign can emphasise complexity reduction and system redundancy. Indeed, we can view the introduction of technologies such as a decision support system as one of complexity reduction for the decision-maker.40 A good test for whether or not such technology is likely to be effective is whether at the end of the redesign process the human task has indeed become less complex.

Unfortunately, the complexity of work practices has a natural tendency to increase over time. Redesigns, perhaps prompted by new IT, are natural points in the evolution of complexity where it can be checked. A more sustainable strategy, however, is to make complexity reduction a continuous task. Clinical processes, work practices and their supporting technologies probably need to be designed with a ‘use-by’ date. In biology, cells have built-in checks for obsolescence and, through the process known as apoptosis, are triggered to die in programmatic ways. Rather than accrete increasingly complex IT in our workplaces that impedes cognition and decision-making, we probably need to actively consider information apoptosis as an essential strategy to minimise complexity.41

Conclusion

Cognition and error form a crucial nexus that we must understand if patient care is to be as safe as possible. In their review of cognition and error, Patel and colleagues provide us with an ambitious research agenda to explore this nexus. Cognitively driven research not only helps us understand why errors occur, but will also help us design interventions to minimise and recover from such risks.

This nexus has an additional component, and that is IT. While much is known about safe IT design and use, the translation of this research evidence into the designs of routinely available information systems is slow. There is a similar gap between what we know to be good implementation practice and actual implementations. For example, the likelihood of successful and safe IT implementation appears greater with incremental and steady system roll-out rather than ‘big bang’ approaches, with genuine engagement of clinical users, and with the devotion of significant resources to training.9

We thus are faced with a substantial challenge. IT is a crucial instrument in our journey to make healthcare delivery safer.42 We are unlikely to achieve our safety goals without it. Yet our skills at designing and implementing IT are still at a stage where we can cause harm. The growing calls for better regulation of IT design, implementation and use are in part a response to the need to take this challenge seriously.43 Given the ever-growing demands for efficiency, the need to engage with an ever-changing evidence base and reduced resources within the health system, it is, however, hard to imagine modern clinical practice without our cognitive prostheses.

Acknowledgments

This work was supported by funding from NHMRC Program grant 568612 and the NHMRC Centre for Research Excellence in E-health.

Footnotes

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.