Introduction
Our information machines exist to make us faster, more powerful decision-makers. Computers prompt our limited human memory with reminders of what we should be doing. They retrieve information we could never remember or indeed even know. They suggest solutions to complex problems for us and take over the many routine tasks that we delegate to them. Information technology (IT) is thus a cognitive prosthesis that enhances our abilities beyond the unaided human norm.1
Unless a decision process is entirely automated, it is the product of the technology, the human user and how well each fits the other. Weed famously saw this act of using IT as one of ‘knowledge coupling’ between human and machine.2 It is the quality of this interaction that counts in the end, and not the quality of the elements in isolation.3
The first test of our interaction with IT should be whether it leads to better, and quicker, decisions. Well-designed interactions with IT should also ensure that our decisions are as safe as possible. Unfortunately, poorly designed interactions can distort decision-making and create new types of hazards and errors, ending in patient harm.4 Indeed, there is a steadily growing evidence base confirming that this harm is real and widely prevalent, and that its consequences for patients can be significant, sometimes fatal.5 6
The evidence base also clearly shows that human factors are a major contributor to IT-associated errors and harms.7 There is thus an imperative to design clinical information systems that are both demonstrably safe in construction and in use. For this to happen, we must move from empirical observation of IT-related hazards, errors and harms to a theory-based understanding of the causes of these risks and their mitigation.
In a thoughtful review of what we know about the …