Examining the impact of high and medium fidelity simulation experiences on nursing students’ knowledge acquisition
Introduction
“What’s wrong with me? I can’t get my breath. Can’t you do something?” The students respond, some hesitant at first, others take immediate action. “Sit him up” one says … “and get some oxygen … quickly!” “I still can’t breathe” their patient gasps. “Now what?” Some students stand back, overwhelmed, unsure. One wants to make a MET call; another begins a focused assessment…
One group of students after another completes the simulation session, followed by a debriefing. All agree that the experience has been valuable; they have learnt “so much” and feel “more confident now”. The educators hope that this experience will have a lasting impact on the students’ learning.
Few academics who have observed or been involved in a simulation session would disagree that this teaching and learning approach appears to be effective. However, research is needed to validate such interventions and to add to the tacit knowledge that underpins nursing education strategies (Ferguson and Day, 2005). In addition, simulation experiences require a significant investment of time and money, and in a time of economic rationalisation this investment must be justified. Assessment of knowledge acquisition with multiple choice questions (MCQs) is the most common approach used to determine the effectiveness of simulation sessions (Laschinger et al., 2008). Pre-test–post-test MCQs are convenient to administer and relatively uncomplicated to analyse. In the United States test scores are a measure of particular interest to educators, as licensure is awarded based on successfully answering a number of MCQs (Kardong-Edgren et al., 2009). However, educators are sometimes surprised and disappointed by the results of pre-tests–post-tests that use MCQs to evaluate the effectiveness of simulations; to date the results have been variable and inconclusive (Alinier et al., 2006; Birch et al., 2007; Cant and Cooper, 2009; Hoadley, 2009; Lapkin et al., 2010; Linden, 2008; Scherer et al., 2007). In addition, many studies have shown a deterioration in knowledge following the simulation experience, suggesting the need for repetition and reinforcement of students’ learning (Bruce et al., 2009; Kardong-Edgren et al., 2009).
In this paper we profile a study that measured and compared knowledge acquisition in third year nursing students exposed to medium or high fidelity human patient simulation manikins (HPSMs).
The findings from the study raise questions about the value of investing in expensive simulation modalities when the increased costs associated with high fidelity manikins may not be justified by an increase in learning outcomes. We also discuss the appropriateness of using MCQs to evaluate the effectiveness of simulation experiences, with appropriateness referring to the extent to which an intervention fits with or is apt in a situation, and effectiveness referring to the extent to which an intervention, when used appropriately, achieves the intended effect (Joanna Briggs Institute, 2008).
Background
Simulation is defined as a technique used to “replace or amplify real experiences with guided experiences that evoke or replace substantial aspects of the real world in a fully interactive manner” (Gaba, 2007, p.126). The use of simulation to reproduce life-like experiences to enhance the education of healthcare professionals has developed at an unprecedented pace. There are many claims made about the benefits of simulation; for example, these experiences are said to:
- Provide opportunities for …
Research design and sample
The study profiled in this paper was conducted in an Australian school of nursing that offers a bachelor of nursing program across three campuses. In 2009, following ethics approval, third year nursing students (N = 203) were informed about the study by advertisements placed on Blackboard™, a web-based platform, and invited to participate by undertaking a simulated learning experience. An information statement was provided and students were asked to sign a consent form prior to participating.
Comparison of mean knowledge scores between high and medium fidelity
Mean pre-test knowledge scores (Test 1) for the control group (medium fidelity) and the experimental group (high fidelity) were 11.833 and 12.523 respectively. An independent t-test indicated no statistically significant difference in these scores, t(82) = −1.233, p > 0.05; this ensured a relatively equal starting point for the study. Mean knowledge scores for Test 2 were 11.763 and 12.667 for the control group and experimental group respectively. The differences in these scores were not statistically significant.
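The group comparison above rests on an independent-samples t-test. As a minimal sketch of the underlying calculation (using hypothetical score lists, not the study’s raw data; the function name `independent_t` is ours), Student’s pooled-variance t statistic can be computed as:

```python
import math
from statistics import mean, variance


def independent_t(sample_a, sample_b):
    """Student's independent-samples t statistic with pooled variance.

    Returns the t value and degrees of freedom (n1 + n2 - 2).
    """
    n1, n2 = len(sample_a), len(sample_b)
    m1, m2 = mean(sample_a), mean(sample_b)
    s1, s2 = variance(sample_a), variance(sample_b)  # sample variances
    # Pooled variance weights each group's variance by its degrees of freedom
    pooled = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2


# Hypothetical pre-test scores for illustration only
control = [11, 12, 13, 11, 12, 12]       # medium fidelity group
experimental = [12, 13, 12, 13, 12, 13]  # high fidelity group
t, df = independent_t(control, experimental)
```

The t value is then compared against the t distribution with the returned degrees of freedom to obtain a p-value; in practice a statistics package (e.g. `scipy.stats.ttest_ind`) would report both in one call.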
Discussion
In this study knowledge acquisition scores were not influenced by manikin fidelity. This raises questions about the value of investing in expensive simulation modalities when the increased costs associated with high fidelity manikins may not be justified by a concomitant increase in learning outcomes. While these results should be factored into decision-making by those investing in simulated learning environments, they do need to be considered with a degree of caution, as the study also raised questions about the appropriateness of using MCQs to evaluate the effectiveness of simulation experiences.
Conclusion
In this study nursing students’ knowledge acquisition scores were not improved by exposure to either medium or high fidelity HPSMs. This result is supported by a number of previous studies. The equivocal nature of these results suggests that assessment of knowledge acquisition using MCQs, although relatively convenient, may not be the most appropriate approach to measure the effectiveness of simulation experiences. We suggest that evaluation methods should be more closely aligned with the …
Acknowledgements
Support for this project was provided by the Australian Learning and Teaching Council Ltd, an initiative of the Australian Government Department of Education, Employment and Workplace Relations. The views expressed in this paper do not necessarily reflect the views of the Australian Learning and Teaching Council.
The authors also wish to acknowledge the other members of the Australian Learning and Teaching Council project team: Dr Sharon Bourgeois, Dr Jennifer Dempsey, Dr Sharyn Hunter, Dr Sarah …
References (29)
- et al., Determining the value of simulation in nurse education: study design and initial results, Nurse Education in Practice (2004)
- et al., VitalSim versus SimMan: a comparison of BSN student test scores, knowledge retention and satisfaction, Clinical Simulation in Nursing (2009)
- et al., The frequency of item writing flaws in multiple choice questions used in high stakes nursing assessments, Nurse Education Today (2006)
- et al., Cognitions associated with nurse performance: a comparison of concurrent and retrospective verbal reports of nurse performance in a simulated task environment, International Journal of Nursing Studies (2010)
- et al., Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education, Journal of Advanced Nursing (2006)
- et al., Evaluation. The OSCE approach in nursing education, The Canadian Nurse (2004)
- et al., Obstetric skills drills: evaluation of teaching methods, Nurse Education Today (2007)
- Assessing for learning: some dimensions underlying new approaches to educational assessment, The Alberta Journal of Educational Research (1995)
- Stating educational objectives in behavioral terms, Nursing Forum (1975)
- et al., Simulator effects on cognitive skills and confidence levels, Journal of Nursing Education (2008)
- A collaborative exercise between graduate and undergraduate nursing students, Nursing Education Perspectives
- Simulation-based learning in nurse education: systematic review, Journal of Advanced Nursing
- Patient care simulations: role playing to enhance clinical understanding, Nursing Education Perspectives
- Using learning outcomes to inform teaching practices in human patient simulation, Nursing Education Perspectives