Original article
Survey of Faculty Perceptions Regarding a Peer Review System

https://doi.org/10.1016/j.jacr.2013.08.011

Purpose

Virtually all radiologists participate in peer review, but to our knowledge, this is the first detailed study of their opinions toward various aspects of the process.

Methods

The study qualified for quality assurance exemption from the institutional review board. A questionnaire sent to all radiology faculty at our institution assessed their views about peer review in general, as well as case selection and scoring, consensus section review for rating and presentation of errors, and impact on radiologist performance.

Results

Of 52 questionnaires sent, 50 were completed (response rate, 96.2%). Of these, 44% agreed that our RADPEER-like system is a waste of time, and 58% believed it is done merely to meet hospital/regulatory requirements. Conversely, 46% agreed that peer review improves radiologist performance, 32% agreed that it decreases medical error, and 42% believed that peer review results are valuable to protect radiologists in cases referred to the medical board. A large majority perform all peer reviews close to the deadline, and substantial minorities frequently or almost always select more than one previous examination for a single medical record number (28%), consciously select “less time intensive” cases (22%), and intentionally avoid cases requiring more time to peer review (30%).

Discussion

Almost one-half of respondents agreed that peer review has value but that, as currently performed, it is a waste of time. The method for selecting cases raises serious questions regarding selection bias. A new approach is needed that stresses education of all radiologists by learning from the mistakes of others.

Introduction

The Joint Commission guidelines [1] state that practitioners are expected to “demonstrate knowledge of established and evolving biomedical, clinical, and social sciences, and the application of their knowledge to patient care and the education of others.” At most institutions, ongoing professional practice evaluation of radiologist performance includes a process of peer review based on a template first described by Donnelly [2]. Peer review should provide an unbiased, fair, and balanced evaluation of radiologist performance to identify opportunities for additional education, error reduction, and self-improvement [3]. Ideally, it should be nonpunitive, have minimal effect on workflow, and allow easy participation [3]. Although one article [4] reported that a “significant percentage” of faculty members viewed peer review as a “time-consuming bureaucratic process to create more paperwork” rather than a means to improve medical care, to our knowledge there has been no detailed study of the opinions of radiologists toward various specific aspects of peer review. Therefore, we undertook a study to assess the views of radiologists at a large urban medical center toward our peer review discrepancy system, which has been mandatory for more than 6 years.

Section snippets

Methods

The institutional review board determined that this study qualified for the quality assurance exemption.

A questionnaire was sent to 52 members of the radiology faculty to determine their views about our local peer review system. That system, which is very similar to the ACR's RADPEER™ product, has been in place for more than 6 years and mandates that each radiologist submit a number of cases equal to 2.5% of that radiologist's prior year's volume (with a maximum of 300 cases). Questions for the survey were …
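As a rough illustration only, the quota rule amounts to taking 2.5% of the prior year's reading volume and capping the result at 300 cases; the function name and example volumes below are ours, not part of the department's system.

    # Illustrative sketch of the submission quota described above;
    # the example volumes are hypothetical, not drawn from the study.
    def peer_review_quota(prior_year_volume, rate=0.025, cap=300):
        """Number of cases a radiologist must submit for peer review."""
        return min(round(rate * prior_year_volume), cap)

    print(peer_review_quota(8000))   # 200 cases owed
    print(peer_review_quota(15000))  # capped at 300 (2.5% would be 375)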

Results

Of 52 questionnaires sent, 50 were completed (response rate, 96.2%). Of the 50 respondents, 44% agreed with the statement that peer review as performed using the RADPEER-like process in our department is a waste of time, 58% thought that peer review is done merely to meet hospital/regulatory requirements, and 42% believed that peer review results are valuable to protect radiologists when there is an issue requiring reporting to the local State Board of Registration in Medicine (Table 1). Also, …

Discussion

At most institutions, peer review is commonly used for assessing radiologist performance in terms of medical and clinical knowledge and for remediating any deficiencies that are detected. In our study, however, 44% of respondents agreed with the statement that peer review, as done in our institution using a system similar to RADPEER, is a waste of time; 58% agreed it was performed merely to meet hospital/regulatory requirements; and 46% participate in peer review only because they are forced to …

Take-Home Points

  • Almost half of radiologists agreed that peer review improves radiologist performance and is valuable to protect radiologists in cases referred to the Medical Board, while almost one-third agreed that it decreases medical error.

  • Conversely, almost half agreed that our RADPEER-like system is a waste of time, and a majority believe it is done merely to meet hospital and regulatory requirements.

  • The method for selecting cases raises serious questions regarding selection bias.

  • A new approach is needed that stresses education of all radiologists by learning from the mistakes of others.
