
Ambulatory Pediatrics

Volume 7, Issue 2, 1 March 2007, Pages 182-186

Original article
Development and Evaluation of High-Fidelity Simulation Case Scenarios for Pediatric Resident Education

https://doi.org/10.1016/j.ambp.2006.12.005

Objective

Pediatric residency programs need objective methods of trainee assessment. Patient simulation can contribute to objective evaluation of acute care event management skills. We describe the development and validation of 4 simulation case scenarios for pediatric resident evaluation.

Methods

We created 4 pediatric simulation cases: apnea, asthma, supraventricular tachycardia, and sepsis. Each case contains a scenario and an unweighted checklist. Case and checklist development began by reaching expert consensus about case content, followed by 92 pilot simulation sessions used for content revision and rater training. After development, 54 first- and second-year pediatric residents participated in 108 simulation test cases to assess the validity of data from these tools for our population. We report outcomes for interrater reliability, discriminant validity, and the impact of potential confounding factors on validity estimates.
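To make the checklist scoring and interrater reliability ideas concrete, the following is a minimal sketch in Python, assuming binary done/not-done checklist items scored independently by two raters; the item marks and variable names are invented for illustration and are not the study's actual instrument or analysis code.

from sklearn.metrics import cohen_kappa_score

# Two raters independently mark each checklist item as done (1) or not done (0).
# These marks are hypothetical example data, not study data.
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]

# With an unweighted checklist, a simple summary score is the proportion of items completed.
summary_score = sum(rater_a) / len(rater_a)

# Cohen's kappa expresses interrater agreement corrected for chance agreement.
kappa = cohen_kappa_score(rater_a, rater_b)

print(f"summary score (rater A): {summary_score:.2f}")
print(f"interrater reliability (kappa): {kappa:.2f}")

An unweighted checklist treats every item as equally valuable; a weighted variant would multiply each item by an importance weight before summing.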

Results

Interrater reliability (κ) ranged from 0.75 to 0.87. There were statistically and educationally significant differences in summary scores between first- and second-year residents for 3 of the 4 cases. Neither previous simulation exposure nor the order in which the cases were performed was found to be a significant factor by multivariate analysis.
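The multivariate analysis referred to above can be pictured as regressing each case's summary score on training year, prior simulation exposure, and case order. The sketch below is only an assumed illustration using simulated placeholder data and an ordinary least squares model, not the authors' actual model or data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data for 108 scored simulation test cases (not study data).
rng = np.random.default_rng(0)
n = 108
df = pd.DataFrame({
    "year2": rng.integers(0, 2, n),       # 1 = second-year resident
    "prior_sim": rng.integers(0, 2, n),   # 1 = prior simulation exposure in medical school
    "case_order": rng.integers(1, 3, n),  # order in which the case was performed
})
# Build scores in which only training year has a real effect.
df["score"] = 0.55 + 0.10 * df["year2"] + rng.normal(0, 0.08, n)

# Enter all three candidate factors together and inspect their coefficients.
model = smf.ols("score ~ year2 + prior_sim + case_order", data=df).fit()
print(model.summary())  # coefficients and p-values for each factor

Under this framing, nonsignificant coefficients for prior exposure and case order correspond to the finding that neither factor confounded the year-of-training comparison.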

Conclusions

Simulation can be used to reliably measure and discriminate resident competencies in acute care management. Rigorous measurement development work is difficult and time-consuming. Done correctly, measurement development yields tangible and lasting benefits for trainees, faculty, and residency programs. Development studies that use systematic procedures and large trainee samples at multiple sites are the best approach to creating measurement tools that yield valid data.

Section snippets

Study Design

This study had a single-group, posttest-only quasi-experimental design.

Setting/Study Participants

We enrolled first- and second-year pediatric residents from Children’s Memorial Hospital’s (Chicago, Ill) pediatric residency program. During the development phase (July 2003 to June 2004), 51 residents were enrolled. In the evaluation phase (July 2004 to June 2005), 54 residents were enrolled, representing 100% of the residents. First-year residents starting in 2003 participated in both years of the study. Participation

Results

All residents who were approached agreed to participate. Technical problems caused one session to be rescheduled. Seventy-two percent of subjects were female (78% and 67% of first- and second-year residents, respectively). Fifty-six percent reported previous simulation exposure in medical school (41% and 70% of first- and second-year residents, respectively).

Reliability data are listed in the Table. The mean adjusted κ coefficients for each case ranged from 0.75 to 0.87, consistent with

Discussion

In this study, we demonstrated that assessing residents with high-fidelity simulation cases can be accomplished with high reliability. Our case content was developed and reviewed by content experts in pediatric emergency medicine and medical education. The response processes the cases elicited from residents, using the simulator in a controlled environment, closely approximated the behaviors needed to provide acute pediatric care in emergent situations. For 3 of the 4 cases, we demonstrated that

Acknowledgments

We are grateful for the provision of simulation laboratory time and personnel from the Patient Safety Simulation Center and the Department of Anesthesiology (M. Christine Stock, MD, Chair), Feinberg School of Medicine, Northwestern University. We also thank Leonard D. Wade for his help as simulation laboratory technician.


Cited by (60)

  • Simulated patient scenario development: A methodological review of validity and reliability reporting

    2020, Nurse Education Today
    Citation Excerpt:

    Only one study piloted the simulated scenario to ensure standardization (Kyaw Tun et al., 2012); others examined videotaped performances of participants or virtual performance of patients as a way to ensure consistency (Park et al., 2010; Tai and Chung, 2008), while another compared expert responses to the simulation (Tsai et al., 2003). Some studies also pilot-tested the simulations either for content revisions and rater training (Adler et al., 2007), or to ensure that steps in the simulation exercises were reliably observable (Lammers et al., 2009). This methodological review yielded 17 records from medicine, paramedicine and nursing.

  • Implementation and outcome evaluation of high-fidelity simulation scenarios to integrate cognitive and psychomotor skills for Korean nursing students

    2015, Nurse Education Today
    Citation Excerpt:

    Guided reflection may have been rated the highest in our study because students were fully able to understand their experiences using a video-assisted verbal debriefing. We randomized students to obtain unbiased results (Adler et al., 2007; Khalaila, 2014), in order to analyze whether a correlation existed between their perception of the simulation design and learning outcomes. The experimental group had significantly higher scores than the control group did for measures of self-confidence such as being able to establish a nursing care plan for a patient with high fever in scenario 1.

  • Teaching and assessment of ethics and professionalism: A survey of pediatric program directors

    2013, Academic Pediatrics
    Citation Excerpt:

    Reviews of professionalism assessment tools indicate a lack of proper measures, with existing tools not being used consistently and heavy reliance on self-assessment or peer assessment.20–22 The shortage of assessment tools for professional behavior contrasts with other required core qualitative competencies, such as communication and interpersonal skills, where evaluation metrics are more developed.23–25 Despite the lack of formal assessment, one third of program directors in our survey reported having prohibited at least one trainee from graduating or sitting for an examination as a result of unethical or unprofessional behavior.

1. Ms Siddall has served as a paid educator for METI, a simulator manufacturer.
