The reliability of peer assessments of quality of care

JAMA. 1992 Feb 19;267(7):958-60.

Abstract

Objective: To critically examine the literature regarding the interreviewer reliability of the standard practice of peer assessment of quality of care.

Data sources: Computerized searches of the English-language literature from 1966 through 1990 using MEDLINE, HEALTHLINE, and SCISEARCH databases were performed to identify studies reporting data on interreviewer agreement of implicit evaluations of patient care episodes.

Study selection: Seventeen studies were identified. Five were excluded from this review because of methodological deficiencies or a lack of data on chance-corrected indexes of agreement.

Data extraction and synthesis: In the 12 remaining studies, the degree of agreement beyond chance was compared with accepted standards. Most of these studies found chance-corrected agreement in the range regarded as poor, indicating that physician agreement regarding quality of care is only slightly better than the level expected by chance alone.
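The "chance-corrected indexes of agreement" referenced above are statistics such as Cohen's kappa, which subtracts out the share of agreement two reviewers would reach by chance alone. The abstract does not name the specific index used in each study; the sketch below is a minimal illustration in Python, assuming kappa as the index and using hypothetical ratings, not data from the reviewed studies. It shows how raw agreement can look high while chance-corrected agreement remains in the range commonly labeled poor.

    # Minimal sketch of a chance-corrected agreement index (Cohen's kappa).
    # The 2x2 table is hypothetical, not data from the reviewed studies.

    def cohens_kappa(table):
        """Compute Cohen's kappa from a square contingency table
        (rows: reviewer A's ratings, columns: reviewer B's ratings)."""
        n = sum(sum(row) for row in table)
        k = len(table)
        # Observed agreement: proportion of cases on the diagonal.
        p_observed = sum(table[i][i] for i in range(k)) / n
        # Chance agreement: expected overlap given each reviewer's marginals.
        row_totals = [sum(row) for row in table]
        col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
        p_chance = sum(row_totals[i] * col_totals[i] for i in range(k)) / n**2
        return (p_observed - p_chance) / (1 - p_chance)

    # Hypothetical example: two physicians each rate 100 care episodes as
    # "acceptable" or "substandard". Raw agreement is 70%, yet kappa is low
    # because both reviewers rate most care acceptable, so most of that
    # agreement is expected by chance alone.
    table = [[60, 15],   # A: acceptable  -> B: acceptable / substandard
             [15, 10]]   # A: substandard -> B: acceptable / substandard
    print(f"kappa = {cohens_kappa(table):.2f}")  # 0.20, despite 70% raw agreement

A kappa of 0.20 falls at the boundary of "slight" agreement under the Landis and Koch benchmarks, consistent with the abstract's point that high raw agreement can mask near-chance performance.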

Conclusions: Given the magnitude of the resources devoted to quality assurance and the centrality of peer assessment to these efforts, there is a need for a global reexamination of the peer review process. A number of proposals appear to have potential for improving the process, including more objective assessment procedures, multiple reviewers, higher standards for reviewers, elimination of systematic reviewer bias, use of outcome judgments, and adoption of practice guidelines.

Publication types

  • Guideline
  • Review

MeSH terms

  • Bias
  • Clinical Protocols
  • Databases, Bibliographic
  • Outcome Assessment, Health Care
  • Peer Review* / methods
  • Peer Review* / standards
  • Quality Assurance, Health Care*