
Nurse Education in Practice

Volume 11, Issue 6, November 2011, Pages 380-383

Examining the impact of high and medium fidelity simulation experiences on nursing students’ knowledge acquisition

https://doi.org/10.1016/j.nepr.2011.03.014

Abstract

Aim

This paper describes a study that measured and compared knowledge acquisition in nursing students exposed to medium or high fidelity human patient simulation manikins.

Background

In Australia and internationally, the use of simulated learning environments has escalated. Simulation requires a significant investment of time and money, and in a period of economic rationalisation this investment must be justified. Assessment of knowledge acquisition with multiple choice questions is the most common approach used to determine the effectiveness of simulation experiences.

Method

This study was conducted in an Australian school of nursing; 84 third year nursing students participated. A quasi-experimental design was used to evaluate the effect of the level of manikin fidelity on knowledge acquisition. Data were collected at three points in time: prior to the simulation, immediately following and two weeks later.

Results

Differences in mean scores between the control (medium fidelity) and experimental (high fidelity) groups for Tests 1, 2 and 3 were calculated using independent t tests and were not statistically significant. Analysis of covariance (ANCOVA) was conducted to determine whether changes in knowledge scores occurred over time and, while an improvement in scores was observed, it was not statistically significant.

Conclusion

The results of this study raise questions about the value of investing in expensive simulation modalities when the increased costs associated with high fidelity manikins may not be justified by a concomitant increase in learning outcomes. This study also suggests that multiple choice questions may not be the most appropriate measure of simulation effectiveness.

Introduction

“What’s wrong with me? I can’t get my breath. Can’t you do something?” The students respond, some hesitant at first, others take immediate action. “Sit him up” one says … “and get some oxygen … quickly!” “I still can’t breathe” their patient gasps. “Now what?” Some students stand back, overwhelmed, unsure. One wants to make a MET call; another begins a focused assessment…

One group of students after another completes the simulation session, followed by a debriefing. All agree that the experience has been valuable; they have learnt “so much” and feel “more confident now”. The educators hope that this experience will have a lasting impact on the students’ learning.

Few academics who have observed or been involved in a simulation session would disagree that this teaching and learning approach appears to be effective. However, research is needed to validate such interventions and to add to the tacit knowledge that underpins nursing education strategies (Ferguson and Day, 2005). In addition, simulation experiences require a significant investment of time and money, and in a time of economic rationalisation this investment must be justified. Assessment of knowledge acquisition with multiple choice questions (MCQs) is the most common approach used to determine the effectiveness of simulation sessions (Laschinger et al., 2008). Pre-test–post-test MCQs are convenient to administer and relatively uncomplicated to analyse. In the United States test scores are a measure of particular interest to educators, as licensure is awarded based on successfully answering a number of MCQs (Kardong-Edgren et al., 2009). However, educators are sometimes surprised and disappointed by the results of pre-tests–post-tests that use MCQs to evaluate the effectiveness of simulations, and to date the results have been variable and inconclusive (Alinier et al., 2006, Linden, 2008, Birch et al., 2007, Cant and Cooper, 2009, Hoadley, 2009, Lapkin et al., 2010, Scherer et al., 2007). In addition, many studies have shown a deterioration in knowledge following the simulation experience, suggesting the need for repetition and reinforcement of students’ learning (Bruce et al., 2009, Kardong-Edgren et al., 2009).

In this paper we profile a study that measured and compared knowledge acquisition in third year nursing students exposed to medium or high fidelity human patient simulation manikins (HPSMs).

The findings from the study raise questions about the value of investing in expensive simulation modalities when the increased costs associated with high fidelity manikins may not be justified by an increase in learning outcomes. We also discuss the appropriateness of using MCQs to evaluate the effectiveness of simulation experiences, with appropriateness referring to the extent to which an intervention fits with or is apt in a situation, and effectiveness referring to the extent to which an intervention, when used appropriately, achieves the intended effect (Joanna Briggs Institute, 2008).

Section snippets

Background

Simulation is defined as a technique used to “replace or amplify real experiences with guided experiences that evoke or replace substantial aspects of the real world in a fully interactive manner” (Gaba, 2007, p.126). The use of simulation to reproduce life-like experiences to enhance the education of healthcare professionals has developed at an unprecedented pace. There are many claims made about the benefits of simulation; for example, these experiences are said to:

  • Provide opportunities for

Research design and sample

The study profiled in this paper was conducted in an Australian school of nursing that offers a bachelor of nursing program across three campuses. In 2009, following ethics approval, third year nursing students (N = 203) were informed about the study by advertisements placed on Blackboard™, a web-based platform, and invited to participate by undertaking a simulated learning experience. An information statement was provided and students were asked to sign a consent form prior to participating.

A

Comparison of mean knowledge scores between high and medium fidelity

Mean pre-test knowledge scores (Test 1) for the control group (medium fidelity) and the experimental group (high fidelity) were 11.833 and 12.523 respectively. An independent t-test indicated no statistically significant difference in these scores, t(82) = −1.233, p > 0.05; this ensured a relatively equal starting point for the study. Mean knowledge scores for Test 2 were 11.763 and 12.667 for the control group and experimental group respectively. The differences in these scores were not
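The between-groups comparison reported above can be sketched with an independent samples t-test. The example below is illustrative only: the scores are simulated (the group sizes and means loosely mirror those reported, but these are not the study's data), and the degrees of freedom, t(82), follow from two groups of 42 students.

```python
# Illustrative sketch of the independent t-test used in the Results.
# The data here are SIMULATED, not the authors' data; group sizes (42 each)
# and approximate means (11.8 vs 12.5) loosely mirror the reported figures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical MCQ knowledge scores for the two groups
medium_fidelity = rng.normal(loc=11.8, scale=2.5, size=42).round()  # control
high_fidelity = rng.normal(loc=12.5, scale=2.5, size=42).round()    # experimental

t_stat, p_value = stats.ttest_ind(medium_fidelity, high_fidelity)
df = len(medium_fidelity) + len(high_fidelity) - 2  # 82, matching t(82)
print(f"t({df}) = {t_stat:.3f}, p = {p_value:.3f}")
```

With alpha set at 0.05, a p-value above 0.05 (as in the study) indicates no statistically significant difference between the groups' mean scores.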

Discussion

In this study knowledge acquisition scores were not influenced by manikin fidelity. This raises questions about the value of investing in expensive simulation modalities when the increased costs associated with high fidelity manikins may not be justified by a concomitant increase in learning outcomes. While these results should be factored into decision-making by those investing in simulated learning environments, they do need to be considered with a degree of caution as the study also raised

Conclusion

In this study nursing students’ knowledge acquisition scores were not improved by exposure to either medium or high fidelity HPSMs. This result is supported by a number of previous studies. The equivocal nature of these results suggests that assessment of knowledge acquisition using MCQs, although relatively convenient, may not be the most appropriate approach to measure the effectiveness of simulation experiences. We suggest that evaluation methods should be more closely aligned with the

Acknowledgements

Support for this project was provided by the Australian Learning and Teaching Council Ltd, an initiative of the Australian Government Department of Education, Employment and Workplace Relations. The views expressed in this paper do not necessarily reflect the views of the Australian Learning and Teaching Council.

The authors also wish to acknowledge the other members of the Australian Learning and Teaching Council project team: Dr Sharon Bourgeois, Dr Jennifer Dempsey, Dr Sharyn Hunter, Dr Sarah

References (29)

  • S.A. Bruce et al. A collaborative exercise between graduate and undergraduate nursing students. Nursing Education Perspectives (2009)
  • R. Cant et al. Simulation-based learning in nurse education: systematic review. Journal of Advanced Nursing (2009)
  • S. Comer. Patient care simulations: role playing to enhance clinical understanding. Nursing Education Perspectives (2005)
  • V.L. Elfrink et al. Using learning outcomes to inform teaching practices in human patient simulation. Nursing Education Perspectives (2010)