Obstacles to research on the effects of interruptions in healthcare
  1. Tobias Grundgeiger1,
  2. Sidney Dekker2,
  3. Penelope Sanderson2,
  4. Birgit Brecknell2,
  5. David Liu2,
  6. Leanne M Aitken2
  1. Würzburg, Germany
  2. Brisbane, Queensland, Australia
  Correspondence to Dr Tobias Grundgeiger, Oswald-Külpe-Weg 82, Würzburg 97074, Germany; tobias.grundgeiger@uni-wuerzburg.de

The authors of the Institute of Medicine report ‘To Err is Human’ concluded that interruptions can contribute to medical errors.1 Given this risk, healthcare researchers have generally, and often solely, viewed interruptions as obstacles to work—as factors that thwart progress, create stress, increase workload, interfere with memory for current and future tasks2,3 and harm efficiency, productivity and safety.4 For example, researchers have reported a positive association between interruptions and errors.5

A contrasting view is to see interruptions as promoting safety and high-quality patient care. From this view, interruptions function as interventions,6–8 such as a call to cease or change work if the interruptee is potentially committing an error.9 Other industries encourage interruptions for that reason. Many researchers investigating interruptions in healthcare cite the sterile cockpit principle10 as a rationale for reducing interruptions—but it is less often noted that copilots are trained to speak up with safety concerns even if it means interrupting a senior pilot's work.11

These different views on studying interruptions have made it difficult to draw conclusions from the research. Granted, diverse perspectives and methods can generate a greater variety of ideas and solutions than single perspectives and methods.12 However, such diversity also makes it more difficult to compile and compare research results or to identify critical research questions. The present paper draws attention to three obstacles to research on the effects of interruptions that arise from differing views and methods: definitions, processes and data collection. We discuss possible solutions that may lead to a better understanding of the effects of interruptions and to a multidisciplinary view of those effects in healthcare.

Definition: what is an ‘interruption’?

The burgeoning literature on interruptions in healthcare offers multiple definitions of what an interruption is.4,13 For instance, in their seminal paper, Brixey et al4 reported a wide range of definitions that variously involved work cessation, distraction, direction of attention, communication or task control. In some definitions that Brixey et al4 noted, the interrupter was both the focus and origin of the operationalisation; in others, the focus was the interruptee; and in still others, it was their joint relationship or interaction. These differences in how an ‘interruption’ is defined can lead to uninformative and disparate comparisons of interruption frequencies.14,15 Furthermore, some studies do not offer an exact definition of an interruption.16,17
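
To make the point concrete, the following sketch applies two hypothetical operational definitions to the same fictitious observation log. The events, fields and definitions are illustrative assumptions, not examples taken from the studies cited above; the sketch simply shows that the same log yields different interruption frequencies depending on the definition applied.

```python
# Purely illustrative: hypothetical events and made-up definitions, not data
# from any of the cited studies.

# Each observed event: who initiated it, whether the clinician paused their
# primary task, and whether the content progressed the current case.
events = [
    {"initiator": "colleague", "task_paused": True,  "case_relevant": True},
    {"initiator": "alarm",     "task_paused": True,  "case_relevant": False},
    {"initiator": "colleague", "task_paused": False, "case_relevant": False},
    {"initiator": "phone",     "task_paused": True,  "case_relevant": False},
    {"initiator": "colleague", "task_paused": True,  "case_relevant": True},
]

# Definition A (interruptee focused): any event during which the primary task
# is paused counts as an interruption.
count_a = sum(e["task_paused"] for e in events)

# Definition B (case-progression focused): only pauses whose content does not
# progress the current case count as interruptions.
count_b = sum(e["task_paused"] and not e["case_relevant"] for e in events)

print(f"Definition A: {count_a} interruptions; Definition B: {count_b} interruptions")
# Same observation log, different counts: the resulting frequencies are not
# directly comparable across studies.
```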

Diverse definitions of interruptions make it difficult to compare studies, but the diversity is inevitable. There are many motivations for investigating interruptions and researchers need to define what is being observed. Researchers may investigate the communicative function of interruptions,18 the resumption of interrupted tasks,19 the effect of distractions on the performance of surgeons20 or the relationship between interruptions and errors.5 There can be different disciplinary views on exactly what process the interruption is interrupting; it could be a communicative process, a cognitive process, case progression or some other phenomenon. Therefore, the definition of interruptions that is used will depend on the research question and the processes that are being investigated.

Even if it is accepted that different research questions lead to different definitions of distractions, multitasking and interruptions, we need a clear statement of the definition being used in a study before we can draw coherent conclusions about the prevalence and nature of these phenomena across studies.

Processes: what is affected by interruptions and what are the consequences of interruptions?

Viewing interruptions as either obstacles to work or as positive interventions also results in different views on what is affected by interruptions. We have argued elsewhere13 that, given the evidence to date, interruptions in and of themselves probably do not cause errors in healthcare tasks: an interruption is neither necessary nor sufficient to cause an error. However, interruptions may contribute to errors by increasing the likelihood that, for example, workers forget tasks, delay procedures or experience cognitive overload. In other words, interruptions have consequences for specific processes that might be cognitive in the case of workers forgetting tasks, or organisational in the case of procedures being delayed. We present three cases that highlight the diversity of the processes that are affected by interruptions.

First, researchers sometimes mention cognitive processes when they introduce a study but do not consider those processes in the data collection,21 or they cite literature on cognitive processes without letting it guide the definition of a distraction or interruption, so that the literature does not influence the choice of independent variables.22 For example, researchers may cite literature on how people remember previous task steps after an interruption, but when the potential relationship between interruptions and harm to patients is tested, no independent evidence is collected on the factors potentially influencing memory for the interrupted task steps.

A second case is the failure to consider organisational processes affected by interruptions, often coupled with the default assumption that interruptions are an inherently undesirable form of communication. Many studies report interruption frequencies and sources,13 but fewer studies address the organisational or clinical value of the transmitted information for the interruptee, the interrupter or both. One exception is Sasangohar et al18 who addressed the content transmitted by an interruption and estimated possible effects of that content on information flow and the progression of patient care tasks.

A third case is the failure to consider clinical processes, such as progress in a resuscitation, procedure or treatment regimen. In such cases, the medical consequences for the patient might be the focus of studying interruptions.23

The above examples show that the processes affected by interruptions and distractions can be diverse. It is important to identify and consider these processes because they describe potential mechanisms by which interruptions may or may not have an impact. They point to factors that may mediate between distractions or interruptions and their consequences, and that could be captured during data collection. Research that does not investigate these processes is likely to remain descriptive, rather than explanatory, and therefore limited in the recommendations it can offer about the management of interruptions.
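
As a purely illustrative sketch, the following simulation shows why an association between interruption counts and error counts, on its own, does not identify the process involved. All parameter values are assumptions, and ‘memory load’ stands in for any unmeasured mediating process; in the simulated data, errors depend on interruptions only through that mediator, yet a study recording only the two counts would still observe a positive association.

```python
# Purely illustrative simulation with assumed parameter values; it is not a
# model of any of the cited studies. Errors here depend on interruptions only
# through a hypothetical mediator ("memory load"), yet a study that records
# only interruption and error counts still observes an association.
import numpy as np

rng = np.random.default_rng(seed=0)
n_shifts = 1000

interruptions = rng.poisson(lam=8, size=n_shifts)              # interruptions per shift
memory_load = interruptions + rng.normal(0, 2, size=n_shifts)  # mediator rises with interruptions
error_rate = np.clip(0.2 * memory_load, 0.01, None)            # errors driven only by the mediator
errors = rng.poisson(lam=error_rate)

print("corr(interruptions, errors):", round(np.corrcoef(interruptions, errors)[0, 1], 2))
print("corr(memory load,   errors):", round(np.corrcoef(memory_load, errors)[0, 1], 2))
# Both correlations are positive. Without measuring the mediating process,
# the interruption-error association alone says little about which mechanism
# a countermeasure should target.
```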

Data collection: who identifies the interruptions?

The profession, and professional experience, of the data collector influences what data are collected and how. Even within a single study with one research question, interruptions may be counted or measured differently, depending on who is conducting the observations. Potter et al6 found that in the same sample of observations, a human factors researcher counted 261 interruptions, whereas a nurse counted 151. Similar discrepancies have been found in research on error counting where practitioners and observing psychologists not only came up with different error counts, but labelled entirely different sets of events as ‘errors’.24

In an unpublished pilot study, we collected data on interruptions experienced by doctors, nurses and administrative staff in the same intensive care unit (ICU) using direct observation by human factors researchers and a diary (self-report). The same definition of ‘distraction’ was used for both arms of the study. As in the earlier Potter et al6 study, the distraction rate suggested by practitioners completing diaries was far lower than the distraction rate recorded by the human factors researchers observing work (but others have reported comparable interruption counts2). This pattern of results was found across all ICU roles sampled. Furthermore, the distraction rates from the direct observations of nurses were similar to the distraction rates from an eye tracking study19 conducted with nurses in the same unit 2 years earlier—using the same definition of distractions, but applied by human factors researchers using more comparable methods.

The differences may be partly due to the different opportunities that the data collection methods provide to record the events of interest, but also to differences in how the phenomenon of interest is understood or experienced by the person doing the recording. For example, if a co-worker interrupts a clinician to present case-relevant information, clinical health professionals may not consider this an interruption: it progresses the case. But a human factors specialist focusing on task resumption may see it as an interruption because the clinician breaks off from performing their present task. Practitioners may define interruptions more in terms of interruption of case progression or workflow23 than in terms of continuities or discontinuities in individual work tasks. Indeed, some research on distractions and interruptions in surgery has explicitly20 or implicitly25 adopted such an ‘interrupted case progression’ view.

Importantly, the data collection method needs to be appropriate for the research question and sensitive to the actual or potentially affected processes. For example, if participants provide a self-report at the end of the day about whether they forgot to start or complete a planned task at any point during the day, they can only report the forgotten tasks that they later remembered or that were subsequently pointed out to them.2 A faithful record of all forgotten tasks or of the frequency of interruptions might require direct observation. Self-reporting may be appropriate if samples of communicated content are of interest.
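
The following minimal sketch, with assumed and purely hypothetical numbers, illustrates this sensitivity problem: an idealised direct observation record captures every forgotten task, whereas an end-of-day self-report can only include the forgotten tasks the clinician later became aware of.

```python
# Purely illustrative: the numbers below are assumptions, not findings from
# the studies discussed. An end-of-day self-report can only include forgotten
# tasks that the clinician later remembered or that someone pointed out, so it
# tends to undercount them relative to an (idealised) observation record.
import random

random.seed(1)
n_forgotten = 200        # tasks actually forgotten during the day (ground truth)
p_later_noticed = 0.4    # assumed probability a forgotten task is later noticed

observed = n_forgotten   # an idealised observer records every instance
self_reported = sum(random.random() < p_later_noticed for _ in range(n_forgotten))

print(f"Direct observation:     {observed} forgotten tasks")
print(f"End-of-day self-report: {self_reported} forgotten tasks")
```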

Our point is not that studies of interruptions should improve inter-rater agreement, although inter-rater agreement is important.26 Instead, our point is that there needs to be a fit between the way interruptions are conceptualised and the investigative method used. As a result, the ability to make direct comparisons between studies cannot always be expected.

In summary, no data collection method, no conceptualisation and no definition of an interruption is per se better than another. However, the method of data collection needs to fit the research question and needs to be sufficiently sensitive to capture the consequences of interruptions (ie, the affected processes). In some cases, multiple methods or studies may be needed before a definitive answer can be achieved about the phenomenon of interest.27

Conclusions

The obstacles that arise from the different views on interruptions noted above should not be handled independently. Rather, given a specific conceptual framework and research question, the working definition of distractions and interruptions should guide the method of data collection.

One benefit of stating the conceptual framework, a clear research question and a definition is that it becomes less likely that studies addressing different questions or using different definitions will be compared directly. Ironically, this may also make it more difficult to aggregate research findings across studies, even when the same environment is being investigated, but this is not a drawback if it prevents inappropriate comparisons from being made. Nonetheless, we suggest that within specific conceptual frameworks with similar research questions, such as those relating to communication, memory or clinical progress, using the same definition should enable the accumulation of knowledge.

Furthermore, a broader focus on the cognitive, organisational or clinical processes affected by interruptions may lead to multidisciplinary views that advance our understanding more rapidly. For example, studies that report statistical associations between interruption and error counts5 or that manipulate distractions or interruptions and then count errors22 might provide evidence for a relationship between interruptions and errors. Such studies provide important findings, but they do not provide unambiguous evidence for the processes that might be mediating the relationship. Such limitations make it difficult to suggest appropriately targeted countermeasures. Developments in this area are welcome.26

A broad view is needed of the processes that might be affected and might (or might not) mediate a connection with patient harm. For example, as Grundgeiger et al19 found, cognitive processes that were initially identified in controlled laboratory studies might have only limited relevance for healthcare settings—a limitation observed in other domains.28 Clinicians confronted with many interruptions will exploit opportunities to actively adjust their work arrangements, rather than continue to expose themselves to high memory load and the associated risk of forgetting uncompleted tasks. The processes affected, and therefore in need of study, would include memory processes and work adaptation.

In summary, research on distractions and interruptions in healthcare has received much attention in recent years. Diverse views and methods have led to a growing body of findings, but their combined scientific leverage is limited. We argue that identifying the sources of this diversity will promote a better understanding of the effects of interruptions and help to harness evidence more effectively about what—if anything—should be done about interruptions.

Footnotes

  • Contributors All authors were involved in defining the content and scope of the paper. TG and SD wrote the first draft of the manuscript. BB collected and interpreted the field and observation data reported. PS prepared the numerical data in its present format and contributed to refinements of the ideas. All authors contributed to amendments of the manuscript and approved the final version.

  • Funding Princess Alexandra Hospital Foundation Grant and Australian Research Council (DP0880920 and DP140101821).

  • Competing interests None declared.

  • Ethics approval Human Research and Ethics Committees of The Princess Alexandra Hospital (HREC/13/QHC/357 and HREC/13/QHC/361).

  • Provenance and peer review Not commissioned; externally peer reviewed.