
Helping healthcare teams to debrief effectively: associations of debriefers’ actions and participants’ reflections during team debriefings
  1. Michaela Kolbe1,2,
  2. Bastian Grande1,3,
  3. Nale Lehmann-Willenbrock4,
  4. Julia Carolin Seelandt1
  1. 1 Simulation Centre, University Hospital Zurich, Zurich, Switzerland
  2. 2 ETH Zürich, Zurich, Switzerland
  3. 3 Institute of Anesthesiology, University Hospital Zurich, Zurich, Switzerland
  4. 4 University of Hamburg, Hamburg, Germany
  1. Correspondence to Dr Michaela Kolbe, Simulation Centre, University Hospital Zurich, Zurich 8091, Switzerland; mkolbe{at}ethz.ch

Abstract

Background Debriefings help teams learn quickly and treat patients safely. However, many clinicians and educators report struggling with leading debriefings, and little empirical knowledge on optimal debriefing processes is available. The aim of this study was to evaluate the potential of specific types of debriefer communication to trigger participants’ reflection in debriefings.

Methods In this prospective observational, microanalytic interaction analysis study, we observed clinicians while they participated in healthcare team debriefings following three high-risk anaesthetic scenarios during simulation-based team training. Using the video-recorded debriefings and INTERACT coding software, we applied timed, event-based coding with DE-CODE, a coding scheme for assessing debriefing interactions. We used lag sequential analysis to explore the relationship between what debriefers and participants said. We hypothesised that combining advocacy (ie, stating an observation followed by an opinion) with an open-ended question would be associated with participants’ verbalisation of a mental model as a particular form of reflection.

Results The 50 debriefings, with 114 participants overall, had a mean duration of 49.35 min (SD=8.89 min) and included 18 486 behavioural transitions. We detected significant behavioural linkages from debriefers’ observation to debriefers’ opinion (z=9.85, p<0.001), from opinion to debriefers’ open-ended question (z=9.52, p<0.001) and from open-ended question to participants’ mental model (z=7.41, p<0.001), supporting our hypothesis. Furthermore, participants shared mental models after debriefers paraphrased their statements and asked specific questions, but not after debriefers appreciated their actions without asking any follow-up questions. Participants also triggered reflection among themselves, particularly by sharing personal anecdotes.

Conclusion When debriefers pair their observations and opinions with open-ended questions, paraphrase participants’ statements and ask specific questions, they help participants reflect during debriefings.

  • continuous quality improvement
  • crew resource management
  • human factors
  • medical education

Data availability statement

Data are available from the corresponding author on reasonable request.


WHAT IS ALREADY KNOWN ON THIS TOPIC

  • Debriefings help teams learn quickly and treat patients safely. However, despite many available debriefing tools, clinicians and educators report struggling with leading debriefings, and little empirical knowledge on optimal debriefing processes is available.

WHAT THIS STUDY ADDS

  • We used group interaction analysis to observe clinicians while they participated in healthcare team debriefings in a simulated setting. Using DE-CODE, we found evidence for the immediate effectiveness of selected types of debriefer and participant communication; for example, when debriefers combined advocacy (sharing an observation and a respective opinion) with open-ended questions, it triggered participants’ reflection.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

  • When debriefers pair their observations and opinions with open-ended questions, paraphrase participants’ statements and ask specific questions, they help participants reflect during team debriefings.

Background

Debriefing among healthcare providers following stressful and complex situations during both clinical work and training is an educational, team learning and patient safety intervention.1–6 It is a guided conversation among participants that aims to explore and understand the relationships among events, actions, thought and feeling processes, and performance outcomes of a clinical or simulated situation.7–11 Debriefing is based on mutual reflection on clinical practice and is a core part of simulation-based training and clinical rehearsal.12 It is a low-cost learning opportunity designed for multiple forms of healthcare teams,1 2 13 which makes it particularly suited for managing effective and safe teamwork during uncertain, complex and risky situations.13–18 Debriefing is associated with improvements in quality and safety, such as more speaking-up behaviour and a reduced number of adverse events in surgery and in labour and delivery.13 19–21

However, many clinicians and educators report struggling with leading debriefings.22 23 Despite the wide range of available tools for creating and maintaining inviting debriefing conditions,24–26 structuring debriefings7 27–37 and managing challenging debriefing situations,25 38 facilitating debriefings is perceived as difficult.22 23 39 40 As a consequence, engagement in debriefing and its organisational and educational effectiveness may suffer, reducing the capability of healthcare teams to learn and improve patient care.41–45 The latter might be exacerbated by the recently discussed eschewal of trained debriefers in favour of having teams debrief themselves.10 41 42

Targeted faculty development is required to help clinicians and educators to improve their debriefing skills.22 40 46–48 Unfortunately, there is little empirical research systematically investigating what actually happens during debriefing.49 Empirically, the debriefing process is a black box; very few studies have examined actual debriefing conversations and how differences in debriefers’ communication influence participants’ outcomes.10 44 45 50 51 In particular, systematic analyses of the interaction between debriefers and participants—how debriefers’ actions relate to participants’ actions—are rare. As a consequence, there is very limited empirical, actionable knowledge on optimal debriefing facilitation for high-quality reflection.49 For debriefing, as well as other team reflexivity activities in healthcare52—huddles, morbidity and mortality conferences and interdisciplinary boards among them—to be a meaningful and effective intervention for improving the quality and safety of patient care, empirically derived knowledge of effective processes is required.53–55 To reveal actionable knowledge on debriefing conversations, this research should not be limited to input (eg, experience of debriefers and participants) and contextual factors (eg, use of video) but explore the actual interaction process (eg, debriefer–participant communication).49 56 Addressing these gaps in our current empirical understanding of what makes debriefing conversations effective is important for developing and targeting debriefing faculty development efforts for clinical faculty.46 47 57 Actionable knowledge on how to achieve in-depth reflection in debriefing may help mitigate workload during debriefing,40 enhance debriefing skills and debriefing quality and thus contribute to safe patient care.

The goal of this study was to explore the relationship between what debriefers and participants said and how this relates to participants’ reflection during debriefing. We explored the probabilities of selected types of debriefers’ communication (eg, asking an open-ended question such as ‘What was on your mind at that time?’ as opposed to a leading question such as ‘Don’t you think you should have spoken up?’) as well as types of participants’ communication (eg, sharing an anecdote) being followed by participants’ verbalisations of reflections (eg, ‘I am thinking that […]’ or ‘My assumption was […]’ or ‘What I realise now is […]’). In particular, based on debriefing theory, we hypothesised that debriefers’ use of advocacy–inquiry—a combination of feedback (consisting of observation and opinion) and open-ended questions—would be associated with participants’ verbalisations of mental models as a particular form of reflection.7 58 59

Method

Study design and inclusion/exclusion criteria

Data collection for this prospective observational, simulation-based, microanalytic interaction analysis study took place at a simulation centre of a large urban academic medical hospital in Europe. Simulation was used as the investigational method60 mainly because video-recordings of debriefings were required and participants were more familiar with being video-recorded during simulation-based training than during clinical work. We observed debriefing participants and debriefers while they participated in healthcare team debriefings. Debriefings were conducted during interprofessional simulation-based training for anaesthesia care. Depending on staffing schedules for nurses, registrars and consultants, four to eight persons trained together for a full day. Training took place during work hours and participants received credits. Participants were recruited via invitations for anaesthesia simulation-based training over 4 weeks (ie, 20 days) in the simulation centre. The criterion for inclusion was employment as an anaesthesia care provider (ie, senior consultant anaesthetist, consultant anaesthetist, anaesthesia registrar, anaesthesia nurse and anaesthesia nursing student). Simulation-based training and debriefing were provided by clinical simulation educators with special training in simulation-based education (ie, completed simulation instructor course and regular participation in faculty development courses) and several years of debriefing experience. Each day, the educators welcomed participants and spent approximately 1 hour establishing an inviting and engaging learning atmosphere, providing orientation to the learning objectives and training details as well as familiarisation with the simulation equipment.24 Subsequently, learners alternated between participating in and observing their colleagues in three simulated cases overall, using SimMan3G (Laerdal, Stavanger, Norway).
Simulated cases were developed using a structured approach and included induction of anaesthesia for a critically ill patient, respiratory problems during anaesthesia induction and a medication error made by a healthcare provider.61 Cases were video-recorded, and the videos were used during the subsequent debriefings. Debriefings followed the Debriefing with Good Judgment and TeamGAINS approaches (online supplemental table 1).7 31 The debriefings started immediately after the simulated cases were completed.

Supplemental material

Study ethics

The ethics committee determined this study to be exempt. Prior to data collection, we informed the clinicians attending the scheduled simulation-based training in detail (ie, verbally and by providing written documents) about the planned study and obtained written informed consent. Study participation was voluntary. Participants could attend the training even if they did not wish to participate in the study. All training attendees agreed to participate. Data were collected anonymously, and no inferences from the data about participating clinicians were possible. We did not collect any patient data. Debriefers and participants were informed about the general study objectives (ie, exploring interaction and investigating how to enhance reflection in debriefing) but not about the particular hypothesis.

Measurement

We applied group interaction analysis methodology.62 Using the video-recorded debriefings, one study team member (JCS) and three graduate students observed the debriefing conversation.62 In particular, they applied DE-CODE, a coding scheme for assessing debriefing interactions, to code the debriefing communication.63 64 DE-CODE is a behaviour observation taxonomy that includes 32 codes for debriefers’ communication (table 1) grouped into five categories and 15 codes for participants’ communication (table 2).63 The codes are organised in five categories based on Torbert and Taylor’s four types of speech (ie, framing, advocating, illustrating and inquiring)65 and an additional category, other. Although we focused on selected codes in our analysis, it is common practice in interaction analysis to apply coding systems that are exhaustive or logically complete,66 67 which means that the entire stream of interaction is captured and coded. Hence, we also advised our coders to apply the complete DE-CODE scheme such that the entire observed team interaction was accounted for. This has several advantages, including preserving the temporal embeddedness and temporal order of each discrete behaviour within the team interaction stream. A fully coded data set without any temporal gaps in the interaction stream is also an important prerequisite for running lag sequential analysis.67 68

Table 1

Observation taxonomy of debriefers’ communication and frequencies of codes

Table 2

Observation taxonomy of participants’ communication and frequencies of codes

Given our study goal to explore the relationship between what debriefers and participants say and how this relates to participants’ reflection during debriefing, we built on recent reflexivity research and operationalised the latter as verbalised mental models (eg, internal thought processes, schemes, assumptions and values), explanations (ie, analysis why something happened), action plans (ie, plan of future activities) and conclusions (ie, adopting alternative actions) by participants (ie, 4 of the 15 DE-CODE codes for debriefing participants, table 2).16 While these four types of verbalised reflections served as target behaviours for most analyses, based on debriefing theory, the hypothesis was tested exclusively for mental models.

To achieve meaningful results in predicting the four target behaviours and to manage the risk of alpha-error cumulation,69 we could not use all 32 DE-CODE codes for debriefer communication (table 1); that is, we needed to reduce the number of predictor codes.

First, we included debriefers’ observations, opinions and open-ended questions in the analysis to test our hypothesis that the use of advocacy–inquiry—the combination of feedback (consisting of observation and opinion) and open-ended questions—by debriefers would be associated with participants’ verbalisations of mental models. Given the considerable number of available DE-CODE codes for questions, we combined all open-ended inquiry codes (ie, ‘emotions’, ‘behaviour’, ‘cognitions’, ‘circular’, ‘ideas or solutions’ and ‘clarification’) into one single code, open-ended question. This newly built open-ended question code excluded other DE-CODE questions that were either not open-ended (ie, realism, leading, inquiry; table 1), exam-like (ie, knowledge) or very specific (ie, conclusion).

Second, we focused on a few additional preceding debriefers’ codes: leading questions, anecdotes, paraphrasing and appreciation, which we selected based on debriefing and team meeting theory and on our own debriefing experience.7 25 31 35 58 59 70

Third, as a debriefing conversation includes conversation among participants, we also explored how participants may trigger reflection among themselves. Previous research supports the idea that team learning is embedded in dynamic temporal team interactions, pointing to patterns of reflection among the participants in our setting.71 Specifically, we explored the preceding functions of participants’ anecdotes, descriptions, explanations, mental models, conclusions and evaluations of own actions.44 63 We selected these codes based on previous research in the area of problem-solving interactions. Across various team contexts such as student groups, regular workplace meetings and teacher teams, research evidence suggests that team members can facilitate their own reflection and learning during dynamic team interactions by pursuing a detailed problem analysis, by elaborating and building on each other’s ideas and by converging in their mental models through communication dynamics.72–75

A standard PC and INTERACT (Mangold International, Arnsdorf, Germany) were used for behaviour coding. INTERACT is a specialised software for behaviour observation and coding.76 It allows for uploading videos and assigning time-stamped, predefined codes to video sequences (online supplementary figure 1). These video codes may later be compiled and analysed (online supplementary figure 2).76
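To make the shape of such event-based data concrete, the sketch below shows how time-stamped coded speech events might be represented and compiled into lag-1 behaviour pairs. The record layout, field names and code labels are illustrative assumptions, not INTERACT’s actual export format:

```python
from dataclasses import dataclass

@dataclass
class CodedEvent:
    """One time-stamped speech event produced by event-based coding.
    Field names and code labels are illustrative, not the INTERACT format."""
    onset: float   # seconds from the start of the debriefing
    offset: float
    speaker: str   # 'debriefer' or 'participant'
    code: str      # assigned DE-CODE code

def lag1_pairs(events):
    """Order events by onset and return consecutive (given, target) pairs,
    the raw material of a lag-1 transition frequency matrix."""
    ordered = sorted(events, key=lambda e: e.onset)
    return [(a.code, b.code) for a, b in zip(ordered, ordered[1:])]

# A hypothetical advocacy-inquiry exchange followed by a participant response
events = [
    CodedEvent(0.0, 4.2, 'debriefer', 'observation'),
    CodedEvent(4.2, 7.9, 'debriefer', 'opinion'),
    CodedEvent(7.9, 11.0, 'debriefer', 'open_ended_question'),
    CodedEvent(11.0, 25.3, 'participant', 'mental_model'),
]
pairs = lag1_pairs(events)
```

Counting how often each (given, target) pair occurs across all debriefings yields the interaction sequence matrix required for lag sequential analysis.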

Coding procedure

To explore whether certain communicative actions by debriefers and participants were significantly more often followed by verbalisations of learners’ reflections, we prepared the debriefing data in a way that enabled subsequent frequency and sequential analysis.77 For that purpose, the three trained coders applied a method called timed, event-based coding.78 Rather than recording or transcribing what was said and done for the duration of the entire debriefing, they used the INTERACT coding software to assign and document the predefined DE-CODE codes for the observed interaction.64 To allow for subsequent analysis of behaviour sequences, the coders preserved the temporal order of communicative statements by defining the onset and offset of speech events (ie, mostly sentences) to which a DE-CODE code could be assigned. Subsequently, they applied the specific codes for debriefers (ie, table 1) and participants (ie, table 2). To determine inter-rater reliability, every fourth video-recorded debriefing was coded separately by two coders.79

Coder training

A member of our study team (JCS), a psychologist with a track record in behaviour observation methodology in the clinical and simulated environment and familiar with DE-CODE, trained three psychology graduate students in applying DE-CODE codes via event-based behaviour coding with the INTERACT coding software: (1) she discussed with them relevant literature on simulation-based training, debriefing and behaviour coding to familiarise them with the overall approach. (2) The coders observed simulation-based training sessions including debriefings. (3) JCS familiarised the coders with the DE-CODE scheme and the INTERACT software. (4) Coders watched and discussed one video-recorded debriefing with JCS. (5) Independently and relying on the DE-CODE coding manual, coders annotated three video recordings of debriefings, to which JCS provided detailed, written feedback. (6) Coders and JCS discussed any discrepancies. Coders were not aware of the specific study objectives and hypothesis to avoid confirmation bias.80 However, they were given information on the general study objective (ie, to explore debriefing communication).

Statistical analysis

Inter-rater reliability was calculated using Cohen’s kappa (κ).81 Using SPSS V.22.0 (IBM SPSS Statistics for Windows, Armonk, New York, USA), we calculated κ for the occurrence versus non-occurrence of each code for every 1 min segment of the coded period.79 82 To identify statistically meaningful behavioural patterns in debriefing communication, we applied quantitative team interaction analysis and used lag sequential analysis implemented in INTERACT.68 Lag sequential analysis considers the entire stream of interaction in annotated observational data and allows for identifying whether a particular sequence of behaviours (eg, a verbalised mental model by a participant followed by a paraphrasing statement by a debriefer) occurs more often than expected by chance.77 It builds on the assumption that during communication each communicative action is probabilistically determined by preceding statements. The analysis requires an interaction sequence matrix, which contains the transition frequencies among communicative actions.77 In this study, we focused on first-order transitions, where one communicative action directly follows the previous one (lag1). As is customary when running lag sequential analysis to identify emergent patterns in interaction data, we pooled our data across different debriefings to form one large data pool for analysis.83–85 This procedure considers the entire stream of interaction as one time series and examines behavioural sequences wherever they may be located within this stream. The benefit of this approach is that it can pinpoint systematic behavioural linkages regardless of the specific debriefing meeting and regardless of whether a behavioural sequence occurs earlier or later in the interaction, as long as behaviours follow one another at the specified analytical lag (in our case, from one behaviour to the immediately following behaviour). Hence, we tested for significant behavioural linkages at lag1 across the entire data set.
We focused on the sequence of behaviours between debriefers and participants in general (rather than focusing on specific individuals in our data set). In other words, all debriefers were pooled together, and all participants were pooled together for analysis purposes. To determine whether a target behaviour following a given behaviour occurred more or less often than expected by chance, we calculated adjusted residuals.68 Adjusted residuals are standardised raw residuals based on the difference between the observed and expected transition frequencies.68 At an alpha level of 5%, any z values larger than 1.65 or smaller than −1.65 imply that a behavioural sequence has occurred above or below chance, respectively.68 83 86 However, as INTERACT does not allow for alpha corrections in multiple comparisons, we applied a stricter alpha level of 1% as cut-off in all analyses: any z values larger than 2.58 or smaller than −2.58 imply that a behavioural sequence occurred above or below chance, respectively.87
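As an illustration of the adjusted residuals described above, the following sketch computes z values from a lag-1 transition frequency matrix. The counts are invented for illustration; only the formula (observed minus expected transition frequencies, divided by the estimated standard error) follows the standard lag sequential approach:

```python
import numpy as np

def adjusted_residuals(transitions):
    """z values for a lag-1 transition frequency matrix: transitions[i, j]
    counts how often target behaviour j immediately follows given behaviour i."""
    T = np.asarray(transitions, dtype=float)
    n = T.sum()
    row = T.sum(axis=1, keepdims=True)   # totals per given behaviour
    col = T.sum(axis=0, keepdims=True)   # totals per target behaviour
    expected = row @ col / n             # expected counts under independence
    variance = expected * (1 - row / n) * (1 - col / n)
    return (T - expected) / np.sqrt(variance)

# Invented 2x2 example: given behaviours = (open-ended question, other),
# target behaviours = (mental model, other)
counts = np.array([[30, 70],
                   [20, 380]])
z = adjusted_residuals(counts)
# A z value above 2.58 in cell [0, 0] would indicate that a mental model
# follows an open-ended question more often than expected at the 1% level
```

INTERACT performs this computation internally; the sketch only makes the underlying arithmetic explicit.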

Results

Participants and descriptive data

Overall, 8 debriefers and 114 debriefing participants participated in 50 debriefings. Debriefings included a minimum of four participants and a maximum of eight participants (3 (2.6%) female and 13 (11.4%) male consultant anaesthetists, 27 (23.7%) female and 23 (20.2%) male anaesthesia registrars, 32 (28.1%) female and 16 (14.0%) male anaesthesia nurses). Two debriefers were female and six were male; two were female psychologists, two male nurses, three male attending physicians and one male resident physician. The 50 debriefings had a mean duration of 49.35 min (SD=8.89 min). They included 18 486 behavioural transitions. Frequencies of debriefers’ (κ=0.74) and participants’ communication (κ=0.68) are shown in tables 1 and 2, respectively.

Associations of debriefers’ communication with participants’ reflection

We had hypothesised that the use of advocacy–inquiry by debriefers—a combination of feedback (consisting of observation and opinion) and open-ended questions—would be associated with participants’ verbalised mental models.7 We indeed found significant lag1 behavioural linkages from debriefers’ observations to debriefers’ opinions (z=9.85, p<0.001), from opinions to debriefers’ open-ended questions (z=9.52, p<0.001), and from open-ended questions to participants’ mental models (z=7.41, p<0.001), supporting this hypothesis (online supplementary figure 3).

We found that debriefers’ open-ended questions triggered participants’ explanations (z=7.12, p<0.001), whereas their appreciations inhibited participants’ explanations (z=−3.11, p<0.001; see figure 1A). Debriefers’ open-ended questions (z=7.41, p<0.001), leading questions (z=2.65, p=0.004) and paraphrasing (z=5.83, p<0.001) all triggered participants’ mental models, whereas debriefers’ appreciations inhibited participants’ mental models (z=−3.37, p<0.001; see figure 1B). Debriefers’ requests for conclusions triggered participants’ conclusions (figure 1C; z=15.95, p<0.001) as well as participants’ action plans (figure 1D; z=9.63, p<0.001).

Figure 1

Lag sequential analysis results for debriefers’ open-ended questions, inquiry for conclusion, leading questions, paraphrasing, appreciation and anecdote followed by participants’ (A) explanations, (B) mental models, (C) conclusions and (D) action plans. Sequences with z-values above 2.58 or below −2.58, respectively, are defined as significant. *P<0.01; **p<0.001.

Associations of participants’ communication with participants’ reflection

Lag sequential analysis also helped us identify communicative patterns among debriefing participants. We found that participants’ explanations (figure 2A) were triggered by participants’ descriptions (z=7.73), explanations (z=4.69), mental models (z=5.47) and evaluations of own actions (z=5.36; p<0.001, respectively). Participants’ mental models (figure 2B) were triggered by participants’ anecdotes (z=10.98), descriptions (z=4.8), explanations (z=4.32) and mental models (z=4.02; p<0.001, respectively). Participants’ conclusions (figure 2C) were triggered by prior participants’ conclusion (z=4.53, p<0.001) and by evaluations of own actions (z=3.71, p<0.001). Finally, participants’ action plans (figure 2D) were evoked by participants’ conclusions (z=4.09, p<0.001).

Figure 2

Lag sequential analysis results for participants’ anecdotes, descriptions, explanations, mental models, conclusions and evaluations of own performance followed by participants’ (A) explanations, (B) mental models, (C) conclusions and (D) action plans. Sequences with z-values above 2.58 or below −2.58, respectively, are defined as significant. *P<0.01; **p<0.001.

Discussion

Debriefings help teams learn quickly and treat patients safely. However, many clinicians and educators report struggling with leading debriefings, and little empirical knowledge on optimal debriefing processes is available. This study addresses recent calls for a more dynamic, in-depth analysis of team debriefings.3 16 63 Our goal was to understand how reflection emerges in debriefing, which is important for developing and targeting debriefing faculty development efforts for clinical faculty, mitigating workload during debriefing, enhancing debriefing skills and quality, and thus contributing to safe patient care.40 46 47 57

Specifically, we aimed to identify specific types of communication regarding their potential to enhance participants’ reflection during debriefing conversations. Using group interaction analysis methodology,62 we found that both debriefers and participants expressed a broad range of communicative actions demonstrating the complexity of the debriefing process. Our findings show the underlying communicative mechanisms for triggering reflection in debriefing:

First, we found significant lag1 behavioural linkages from debriefers’ observation to debriefers’ opinion, from opinion to debriefers’ open-ended question, and from open-ended question to participants’ shared mental model (supplementary figure 3). This finding supports the assumption that debriefers help learners to reflect on their actions by means of the advocacy–inquiry approach.7 Of note, these observed behavioural linkages at lag1 do not necessarily imply longer chains of behaviours, such as observations followed by opinions followed by open-ended questions, which would require a lag3 sequential analysis. Rather, our findings at lag1 (from one behaviour to the immediately next behaviour within the interaction stream) should be interpreted only at lag1. Second, debriefers triggered participants’ explanations and verbalised mental models by asking open-ended questions. Sharing mental models was also triggered by paraphrasing. Surprisingly, leading questions triggered reflections as well. In contrast, appreciations by debriefers seemed to inhibit the immediate sharing of mental models. Participants voiced conclusions and action plans particularly when debriefers specifically asked for them. Third, our analysis reveals that participants can also trigger reflection among themselves; specifically, shared mental models evoked further mental models, and personal anecdotes evoked mental models.

Our findings extend previous research by (a) unpacking the black box of the debriefing process, (b) uncovering what actually happens in debriefings and (c) revealing techniques and actionable behavioural strategies that help participants reflect. In what follows, we discuss the theoretical and practical implications of these findings, point to study limitations and highlight future research avenues.

In line with previous reasoning, we found empirical support for the use of advocacy–inquiry.7 31 59 88 89 When debriefers acted as ‘conversational scientists’7 by articulating their feedback as observation and opinion and combining it with open-ended questions, participants responded with verbalised mental models significantly more often than expected by chance and compared with when debriefers used leading questions with predetermined answers. A possible explanation for this finding is that sharing one’s own view as a debriefer reveals honesty and that opening one’s view to challenge may increase mutuality and, thus, reflection.7 Interestingly, leading questions and paraphrasing triggered participants’ reflections as well. One might argue that when using leading questions and paraphrasing, debriefers to some degree impose their point of view on participants or express in their own words what they heard.63 However, since teams tend to talk about task-specific issues rather than teamwork,42 both debriefers’ leading questions and debriefers’ paraphrasing may serve as an anchor and enable participants to verbalise their mental models of teamwork. That is, when debriefers offer understanding by paraphrasing, and even assumptions by asking leading questions, this might facilitate participants’ reflection, as participants can complement and correct what the debriefers have heard. Interestingly, in our data, debriefers’ stand-alone appreciations without any follow-up questions inhibited participants’ immediate reflection.
Appreciating participants in debriefings may reflect a common feedback dilemma of simulation educators: providing clear and honest feedback on task performance without damaging their relationship with learners.58 90 91 Although appreciation is most certainly crucial for acknowledgement, connection and motivation,92 93 it has a long history of ‘misuse’, in particular through the prominent feedback model commonly known as ‘sandwich feedback’, which suggests that negative feedback is best packed between layers of appreciation—a model that implies that feedback is mostly negative and that lacks empirical evidence.94–96 Through this ‘sugarcoating’,92 feedback receivers learn that there is an overarching ‘but’ and ignore the appreciations, because they have experienced that these are just a means to an end and that the actual (negative) feedback is the real content of the sandwich.97 However, appreciation may have different conversational functions in a debriefing. For example, in combination with previewing, it may serve as a transitional strategy to move from one topic to another (eg, ‘Great insights! Now that we have discussed the challenges of speaking up, let’s talk about how leadership can facilitate speaking up’).

Furthermore, our results demonstrate that debriefing participants trigger reflection among themselves with two separate mechanisms: autocontingencies and storytelling. In terms of autocontingencies, we found that sharing reflections is somewhat contagious: participants encourage each other to express reflections.98 Autocontingencies have also been found in previous research on team meetings83 84 99: statements by team members stimulate other team members to respond with feedback that is consistent with the displayed statement, leading to conversational spirals such as complaining or solution cycles.98 Furthermore, our finding that storytelling by participants (ie, talking about experiences or personal anecdotes) triggered participant reflection is in line with research demonstrating the powerful and far-reaching effects of anecdotes in teaching and human communication.93 100 101

Our results have implications for debriefing faculty development. First, faculty development should focus on a few but highly effective debriefing behaviours: advocacy–inquiry, open-ended questions, paraphrasing participants’ statements and asking for conclusions and specific action plans. This focus may also help manage the considerable cognitive load of debriefing.40 Second, when immediate reflection by participants is the desired objective, our lag1 results suggest that debriefers should refrain from stand-alone appreciation without any follow-up question; instead, they may use advocacy–inquiry for in-depth exploration of desired behaviours. Third, debriefers are advised to allow participants to engage and remain in reflective patterns by initiating participants’ reflections and anecdotes, paraphrasing and listening. Fourth, successful reflection does not hinge exclusively on debriefers; participants themselves can sustain reflection during debriefings, an ability that can be fostered through training as well as by raising debriefers’ awareness of this team potential.

This study has limitations. First, as it was a single-site study in one Western centre in a simulation-based training setting, generalisation of the results across clinical contexts, organisations and cultures requires further research. In particular, clinical debriefings vary in many respects depending on their organisational, logistical and cultural surroundings.39 Although debriefings in the clinical and the simulation-based training setting share the common goal of learning and reflection,16 102 and we explored how to enhance reflection, future research should investigate whether our results generalise to different types of hospitals, cultural settings, courses and even debriefing approaches.103 Second, we used video recordings of debriefings. Video-recording might have affected the social interactions of the debriefing participants.104 However, since the simulated cases were also video-recorded and the videos were used during the subsequent debriefings, we assume that participants were familiar with being video-recorded. In addition, participants appeared to be engaged in the debriefings without being affected by the recording, because they shared their mental models; had video-recording influenced them, their communicative actions would likely have remained more superficial, for example, offering descriptions instead of reflections. Third, this study examined relatively long debriefings in the simulated setting, whereas the duration of debriefings in the clinical setting varies between a few minutes and 30 min.39 Future research should examine whether our results hold for short(er) debriefings in both the simulation-based training and the clinical setting. Since team interactions are dynamic and may change over time, future research could also explore if and how these interaction patterns develop over time.105 Fourth, in our data analysis, we focused on short-term lag1 behaviour transitions, that is, immediate behaviour sequences. Analysis over a longer time span might yield different results. For example, while leading questions may ‘work’ in the short term to elicit mental models, they may lead to shame, guilt and defensiveness in the longer term.91 106 Fifth, our lag1 analysis included only selected behaviours and does not allow conclusions about behaviours that we coded but did not include in the analysis, for example, framing codes such as ‘previewing’ and ‘structuring’. Based on debriefing theory and our own experience as debriefers, we assume these are useful, but we cannot draw conclusions from our study.38 57 107 Sixth, our debriefer sample included only seasoned debriefers; results might differ for novice debriefers. Finally, team debriefings are increasingly conducted virtually.108–110 Research is required to investigate how reflection patterns might differ between face-to-face and virtual debriefings.
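For readers unfamiliar with lag sequential analysis, the lag1 logic discussed above can be sketched in a few lines of code. The following toy example is illustrative only: it is not the INTERACT/DE-CODE pipeline used in this study, and the behaviour codes are invented. It compares, for each observed transition a → b, the conditional probability p(b | a) with the base rate p(b), using an Allison-Liker-style z-statistic as one common formulation.

```python
from collections import Counter
from math import sqrt

def lag1_zscores(seq):
    """Toy lag-1 sequential analysis of a coded behaviour sequence.

    For each observed transition a -> b, compares the conditional
    probability p(b | a) against the base rate p(b) and returns an
    Allison-Liker-style z-score. Illustrative sketch only.
    """
    n = len(seq) - 1                     # number of lag-1 transitions
    pairs = Counter(zip(seq, seq[1:]))   # observed a -> b counts
    given = Counter(seq[:-1])            # how often each code opens a transition
    target = Counter(seq[1:])            # how often each code closes one
    z = {}
    for (a, b), n_ab in pairs.items():
        p_a = given[a] / n               # base rate of the antecedent
        p_b = target[b] / n              # base rate of the consequent
        p_b_given_a = n_ab / given[a]    # conditional probability p(b | a)
        var = p_b * (1 - p_b) * (1 - p_a) / given[a]
        if var > 0:                      # skip degenerate transitions
            z[(a, b)] = (p_b_given_a - p_b) / sqrt(var)
    return z

# Hypothetical codes: 'question' is always followed by 'mental_model',
# so that transition receives a large positive z-score.
codes = ["question", "mental_model", "other"] * 40
scores = lag1_zscores(codes)
```

A large positive z-score means the consequent follows the antecedent far more often than its base rate would predict, which is the sense in which the behavioural linkages reported in the Results section are ‘significant’.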

In sum, our study sheds light on what happens during team debriefings. It highlights the fine-grained social dynamics of debriefings, identifies functional and dysfunctional communicative actions and provides specific behavioural strategies that debriefers can use to facilitate reflection in healthcare team debriefings. We hope that our findings will stimulate further behavioural interaction research on team debriefings in the interest of mutual learning to improve the quality and safety of patient care.

Data availability statement

Data are available from the corresponding author on reasonable request.

Ethics statements

Patient consent for publication

Ethics approval

This study involves human participants but was exempted by the Kantonale Ethik-Kommission Zürich (KEK-ZH-Nr. 2013-0592). Participants gave informed consent to participate in the study before taking part.

Acknowledgments

The authors would like to thank Hubert Heckel, Adrian Marty, Niels Buse, Axel Knauth and Michael Hanusch for their help in collecting data, Lynn Häsler, Sarah Kriech and Rebecca Hasler for their help in data coding and Alfons Scherrer and Andrea Nef for their operational support.

References

Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Twitter @Mi_Minka

  • Contributors MK and JCS designed the study. MK, BG and JCS collected the data. MK, NL-W and JCS analysed the data. MK wrote the first draft of the manuscript. All authors approved the final manuscript. MK acted as guarantor.

  • Funding This research was funded by a grant from the Swiss National Science Foundation (grant no. 100014_152822).

  • Competing interests The authors declare the following conflicts of interest: Michaela Kolbe, Bastian Grande and Julia C Seelandt are faculty at the Simulation Centre of the University Hospital Zurich, providing debriefing faculty development training. Michaela Kolbe is faculty for the Debriefing Academy, which runs debriefing courses for healthcare professionals. NL-W has no conflicts of interest.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
