To the point: reviews in medical education—the Objective Structured Clinical Examination

https://doi.org/10.1016/j.ajog.2008.09.878

This article, the eighth in the To the Point series prepared by the Association of Professors of Gynecology and Obstetrics Undergraduate Medical Education Committee, discusses the effectiveness of the Objective Structured Clinical Examination (OSCE) for assessment of learners' knowledge, skills, and behaviors. The OSCE has also been used for the appraisal of residents and physicians undergoing licensure examinations; herein we focus on its application to undergraduate medical education. We review evidence for best practices and recommendations on effective use of the OSCE and requirements for and challenges to its implementation, including creative ways to design an OSCE program with a limited budget. We discuss its role in providing formative and summative feedback and describe how learner performance on the OSCE relates to subsequent testing, including the US Medical Licensing Examination (USMLE) step 1. A representative case with assessment used at the authors' medical schools is included.

Section snippets

OSCE use in US medical schools

A recent Liaison Committee on Medical Education survey of 126 accredited US medical schools found that 97 schools used at least 1 OSCE in introductory skills courses and 48 programs used the method in the obstetrics and gynecology clinical clerkship.7 In 82 schools, a comprehensive third- or fourth-year OSCE was given; the majority required a passing grade for graduation. Eighty OSCEs evaluated history taking, 81 evaluated physical examination skills, and 80 examined the ability to synthesize…

What skills are best assessed by the OSCE?

The OSCE is best suited to testing clinical, technical, and practical skills, as well as the higher-order skills needed to accomplish knowledge- or skill-based educational objectives.6 However, all competencies recommended by the Accreditation Council for Graduate Medical Education for assessment during clinical clerkships (patient care, medical knowledge, interpersonal and communication skills, professionalism, practice-based learning and improvement, and systems-based practice) may be assessed using…

Selection of OSCE case content

Among the first steps in establishing the content validity of an OSCE is the development of specific goals and corresponding tasks within simulated clinical scenarios. For a valid test, the teaching and testing goals must match the learning level.3, 13 The complexity of OSCE cases should increase from junior to more senior medical students.

Expert faculty in each specialty may determine the learning objectives and drive the focus of OSCE cases for any given student training level.6 For example,…

The cost-effective OSCE

Setting up an OSCE program can be expensive, labor intensive, and administratively cumbersome.15 Fortunately, these challenges can be overcome by thoughtful teamwork among the OSCE director, the standardized patient (SP) coordinator, and a multispecialty faculty group, with the support and use of the resources available at each medical school, obstetrics and gynecology department, or both. Some medical schools provide the resources for a centralized OSCE; elsewhere, individual departments develop their own OSCEs.

The OSCE

Case writing

Case authors play a key role in the success of an OSCE program. The cornerstone of the OSCE is a realistic and appropriately challenging clinical case. Some cases focus on communication skills and others on factual knowledge, but all require time and effort to write and revise. Typically, interested faculty members are recruited by the OSCE director to write and revise cases. A case-writing workshop, taught by the OSCE director or an experienced OSCE faculty member, is helpful to new OSCE faculty and…

Case examiners

Examiners participate in the administration of the OSCE by observing the learner and completing a structured checklist. Depending on the availability of audiovisual equipment, they may sit in the examination room or view the examination from a separate room. A faculty briefing before the examination is helpful when the examiner is not the case author or is not intimately familiar with the case. By using nonfaculty examiners, including SPs, MD/PhD students, nurse-practitioner students,…

Feedback and debriefing

Following the completion of an individual station or of the entire OSCE, debriefing provides opportunities for learner feedback. Video recording allows individual students to be debriefed after all examinees have completed the OSCE. Student self-assessment may be effectively incorporated into the debriefing session. Depending on resources, self-assessment, faculty debriefing for all students or only for those with suboptimal performance, or debriefing by trained nonfaculty personnel may be…

Testing facility

Testing facilities range from state-of-the-art simulation centers to readily available office examination rooms. Handheld video equipment with a microphone, together with paper evaluation forms, can be used as a lower-cost alternative to Web-based digital recording for documentation and feedback.4 In a comparison study, a Web-based OSCE produced class performance similar to that of a paper OSCE.4 Furthermore, a correlation of student achievement on the Web-based OSCE with subsequent National Board of Medical Examiners (NBME) subject…

The role of the SP

SPs are individuals trained to portray patient complaints dependably and realistically and to provide consistent verbal and behavioral responses to stimuli from examinees.4 SPs may be asymptomatic, have stable findings, or be trained to simulate physical findings.18 Because SPs have first-hand experience of the examinee as a clinician, they are uniquely qualified to rate examinees on interpersonal skills. Key characteristics helpful in matching cases to appropriate SPs are summarized in…

Examination administration

Time spent in each OSCE station depends on the complexity of the history-taking and clinical skills being tested.6 The time allotted per station may be longer in higher-stakes examinations. A typical testing timeline is given as an example in Table 6.

Improving test performance

A common criticism of the OSCE is its low interrater reliability, with correlation coefficients reported to range from 0.2-0.95.18 Careful construction of checklist items improves reliability, with reported correlation coefficients of 0.8-0.9.18 A checklist should include a manageable number of easily definable learning issues.6 Inclusion of more than 10-12 checklist items in any of the case areas (history, physical examination, and counseling, if included) is negatively associated with reliability and…
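
To make these reliability figures concrete, here is a minimal Python sketch that computes chance-corrected interrater agreement (Cohen's kappa) for two examiners completing the same binary checklist. The 10-item checklist data, the examiner names, and the choice of kappa as the statistic are illustrative assumptions, not material from the article.

    # Illustrative only: interrater agreement on a binary OSCE checklist.
    # Each examiner marks every item 1 (performed) or 0 (not performed);
    # Cohen's kappa corrects the raw agreement rate for chance.

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two binary ratings of the same items."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement from each rater's marginal rate of marking "1".
        p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
        expected = p_a * p_b + (1 - p_a) * (1 - p_b)
        return (observed - expected) / (1 - expected)

    # Hypothetical 10-item history-taking checklist, two examiners.
    examiner_1 = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0]
    examiner_2 = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1]
    print(f"kappa = {cohens_kappa(examiner_1, examiner_2):.2f}")  # kappa = 0.52

In this invented example the raw agreement is 0.80 but kappa is only 0.52, which is one reason a manageable number of easily definable items matters: ambiguous items generate disagreement that chance correction penalizes heavily.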

Standard setting

The determination of an appropriate standard for minimum examination performance is a complex issue. Two major types of standards have been used. With a criterion-referenced standard, students are judged by their level of subject mastery and should perceive the examination as easy if they know the material well. Criterion-based scoring may assign 0 for omitted/incorrect, 1 for partially correct, and 2 for correct items.18 A norm-referenced standard compares students against each other and…
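
The contrast between the two standards can be made concrete with a short Python sketch. Items are scored 0/1/2 as in the criterion-based scheme above; the particular cutoffs (70% of the maximum score; 1 SD below the class mean) and all scores are invented for illustration, not taken from the article.

    # Illustrative only: criterion- vs norm-referenced pass standards.
    from statistics import mean, stdev

    def criterion_pass(item_scores, max_per_item=2, cutoff=0.70):
        """Pass if the student reaches a fixed fraction of the maximum score."""
        return sum(item_scores) >= cutoff * max_per_item * len(item_scores)

    def norm_referenced_cutoff(class_totals):
        """Cutoff set relative to the cohort: 1 SD below the class mean."""
        return mean(class_totals) - stdev(class_totals)

    # Hypothetical 10-item station: one student's 0/1/2 item scores,
    # plus the total scores of a 10-student class.
    student = [2, 2, 1, 2, 0, 2, 1, 2, 2, 1]           # 15 of 20 points
    cohort = [15, 18, 12, 16, 9, 17, 14, 13, 19, 11]   # class totals
    print(criterion_pass(student))                         # True (cutoff 14/20)
    print(sum(student) >= norm_referenced_cutoff(cohort)) # True (cutoff ~11.2)

The design difference is visible in the code: the criterion cutoff is fixed before the examination is given, whereas the norm-referenced cutoff moves with each cohort's performance.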

Assessment factors influencing performance: student and examiner background

Studies have examined the influence of student background, recent clinical experience, and learning style on OSCE performance. In a prospective sample of first-year students, a well-organized, deep learning style related favorably to OSCE performance, but prior clinical experience did not.25 Testing context, such as the most recent rotation, influenced students' approach to a case: students took a broader history after completing more clinical rotations.26 Ethnicity was not associated with overall OSCE…

Can OSCE performance predict a physician's future success?

Performance on OSCEs was found to be an excellent predictor of future performance on standardized examinations such as the USMLE step 1, which measures basic science knowledge and the ability to interpret data and to identify pathologic specimens and clinical problems through the application of scientific principles.31 In another study, 93% of learners who passed the OSCE passed subsequent Canadian medical licensing examinations.10 Conversely, of those who failed the OSCE, only 66% later passed these…

Why OSCE? A summary

During undergraduate medical training, the OSCE provides both formative and summative assessment of cognitive and noncognitive skill domains. Formative assessment identifies deficiencies and motivates remediation; summative assessment pinpoints outstanding or suboptimal ability in particular areas. Globally, the OSCE provides immediate feedback to detect weaknesses within a curriculum, teaching methods, or both. It is therefore useful both for assessment and for the enhancement of educational…

Acknowledgment

The authors acknowledge Denise A. Bargsten, the OSCE coordinator at the College of Medicine, Mayo Clinic, for background information about this program, especially the information provided in Table 4.

References (33)

  • G.B. Nackman et al. Implementation of a novel Web-based objective structured clinical evaluation. Surgery (2006)
  • J.L. Bienstock et al. Effect of student ethnicity on interpersonal skills and objective standardized clinical examination scores. Obstet Gynecol (2000)
  • D.G. Kassebaum et al. Shortcomings in the evaluation of students' clinical skills and behaviors in medical school. Acad Med (1999)
  • L. Wilkerson et al. Assessing physical examination skills of senior medical students: knowing how versus knowing when. Acad Med (2003)
  • R.M. Harden et al. Assessment of clinical competence using objective structured examination. Br Med J (1975)
  • G. Adamo. Simulated and standardized patients in OSCEs: achievements and challenges 1992-2003. Med Teach (2003)
  • P.B. McFaul et al. The assessment of clinical competence in obstetrics and gynaecology in two medical schools by an objective structured clinical examination. Br J Obstet Gynaecol (1993)
  • D. Newble. Techniques for measuring clinical competence: objective structured clinical examinations. Med Educ (2004)
  • B. Barzansky et al. Educational programs in US medical schools, 2002-2003. JAMA (2003)
  • Common Program Requirements: General Competencies: Approved by the ACGME Board Feb. 13, 2007 [Internet]. Chicago:...
  • J.G. Frohna et al. Development of an evaluation of medical student competence in evidence-based medicine using a computer-based OSCE station. Teach Learn Med (2006)
  • B.E. Mavis et al. Between a rock and a hard place: finding a place for the OSCE in medical education. Med Educ (2002)
  • K.M. Mazor et al. Assessing professionalism in the context of an objective structured clinical examination: an in-depth study of the rating process. Med Educ (2007)
  • R. Tamblyn et al. Physician scores on a national clinical skills examination as predictors of complaints to medical regulatory authorities. JAMA (2007)
  • C. Selby et al. Set up and run an objective structured clinical exam. BMJ (1995)
  • C. van der Vleuten. Validity of final examinations in undergraduate medical training. BMJ (2000)

The views expressed in this article are those of the authors and do not reflect the official policy or position of the US Department of Defense, the US Department of Health and Human Services, or the US government.
