In their paper 'The science of human factors: separating fact from
fiction', Russ et al present a description of the human factors (HF)
discipline, and discuss several cases where the science of HF has been
misapplied in healthcare [1].
On examining some of the examples of misapplication they provide, it
became apparent that in most cases the term 'human factors' was used to
describe factors relating to human behaviour (e.g. communication), rather than the scientific discipline [2, 3]. The research did not purport to adopt an HF methodology or stance. Are these really misconceptions about HF science?
Russ et al also provide examples of studies that refer to HF science
but emphasize the failures of people. They describe this research as
'counterproductive', but the work they cite adopted HF methods and exposed
some interesting aspects of human behaviour. For example, consultation
with clinicians revealed that user acceptance of technology was critical
for successful implementation of electronic medication management [4]. In
another study (of which I am an author), review of medication charts
revealed that misuse of an electronic prescribing system was associated
with the generation of unnecessary computerized safety alerts [5]. We
concluded that both system design and inadequate training may have
contributed to system misuse.
In their viewpoint, Russ et al discuss training at some length and
provide an overview of where training is an appropriate versus
inappropriate HF technique for improving patient safety [1]. This
discussion interested me, as their table (Table 1) referred to few studies
examining the effectiveness of training. They explain that training is not
appropriate if it is designed to address a type of error committed by
multiple users, as widespread error indicates a mismatch between system
design and human characteristics. Identification of mismatch between
design and human capabilities/limitations is at the crux of the HF
discipline and is undoubtedly an important undertaking. But is it not also
possible that all users received the same (ineffective) training, and so
all made the same types of error? In the same way, Russ et al suggest that
training is not appropriate when the goal is for individuals to stop using
technologies in the wrong way. But can it not be that correct use of the
system was not effectively demonstrated during training, and so users were
not aware that more efficient use was possible?
I agree with Russ et al that additional training should only be considered following an evaluation of system design, but what if a design is
intended to break free from previous iterations, with the aim of
transforming or revolutionizing a task? There is a tension between designing systems that replicate current processes, and so integrate quickly into clinical practice, and designing systems that allow tasks to be completed in more efficient ways but which require a change in work and cognitive processes, and so necessitate a greater level of training.
Russ et al were quick to criticize previous research, but a closer look reveals value in all HF applications to healthcare.
References:
1 Russ AL, Fairbanks RJ, Karsh B-T, Militello LG, Saleem JJ, Wears
RL. The science of human factors: separating fact from fiction. BMJ Quality & Safety. 2013 Apr 16.
2 Cahan MA, Starr S, Larkin AC, Litwin DM, Sullivan KM, Quirk ME.
Transforming the culture of surgical education: Promoting teacher identity
through human factors training. Archives of Surgery. 2011;146(7):830-4.
3 Rosenstein AH, O'Daniel M. Impact and Implications of Disruptive
Behavior in the Perioperative Arena. Journal of the American College of
Surgeons. 2006;203(1):96-105.
4 Abrams H, Carr D. The Human Factor: Unexpected Benefits of a CPOE
and Electronic Medication Management Implementation at the University
Health Network. Healthcare Quarterly. 2005;8(Sp):94-8.
5 Baysari MT, Reckmann MH, Li L, Day RO, Westbrook JI. Failure to
utilize functions of an electronic prescribing system and the subsequent
generation of 'technically preventable' computerized alerts. Journal of the American Medical Informatics Association. 2012;19(6):1003-10.
Conflict of Interest:
None declared