Evaluating the care of general medicine inpatients: how good is implicit review?

Ann Intern Med. 1993 Apr 1;118(7):550-6. doi: 10.7326/0003-4819-118-7-199304010-00010.

Abstract

Objective: Peer review often consists of implicit evaluations by physician reviewers of the quality and appropriateness of care. This study evaluated how reliably implicit review can measure various aspects of care on a general medicine inpatient service.

Design: Retrospective structured implicit review of the charts of a stratified random sample of consecutive admissions to a general medicine ward.

Setting: A university teaching hospital.

Patients: Twelve internists were trained in structured implicit review and reviewed 675 patient admissions (with 20% duplicate reviews, for a total of 846 reviews).

Results: Although inter-rater reliabilities for assessments of overall quality of care and preventable deaths (kappa = 0.5) were adequate for aggregate comparisons (for example, comparing mean ratings on two hospital wards), they were inadequate for reliable evaluations of single patients using one or two reviewers. Reviewers' agreement about most focused quality problems (for example, timeliness of diagnostic evaluation and clinical readiness at time of discharge) and about the appropriateness of hospital ancillary resource use was poor (kappa ≤ 0.2). For most focused implicit measures, bias due to specific reviewers who were systematically harsher or more lenient (particularly for evaluation of resource-use appropriateness) accounted for much of the variation in reviewers' assessments, but this was not a substantial problem for the measure of overall quality. Reviewers rarely reported being unable to evaluate the quality of care because of deficiencies in documentation in the patient's chart.
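The kappa values reported above index chance-corrected agreement between paired reviewers. The sketch below is not from the article; it shows, on invented data, how Cohen's kappa for two reviewers' categorical ratings could be computed. The study's actual rating scales, weighting scheme, and reviewer pairings are not specified in the abstract, so the rating labels and sample size here are illustrative assumptions only.

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical ratings to the same cases."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: proportion of cases where the two raters give the same rating.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement by chance, from each rater's marginal rating frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two reviewers rating 10 duplicate-reviewed charts
# on an assumed coarse overall-quality scale.
rater1 = ["good", "good", "adequate", "poor", "good",
          "adequate", "good", "poor", "adequate", "good"]
rater2 = ["good", "adequate", "adequate", "poor", "good",
          "good", "good", "adequate", "adequate", "good"]
print(round(cohen_kappa(rater1, rater2), 2))  # about 0.51 for this made-up data
```

With this toy data the observed agreement is 0.7 but the chance-corrected kappa is only about 0.5, which is the general pattern the abstract describes for overall quality: agreement that looks reasonable in aggregate yet is too unreliable for judging a single patient's care from one or two reviews.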

Conclusion: For assessment of overall quality and preventable deaths of general medicine inpatients, implicit review by peers had moderate reliability, but for most other specific aspects of care, physician reviewers could not agree. Implicit review was particularly unreliable at evaluating the appropriateness of hospital resource use and the patient's readiness for discharge, two areas where this type of review is often used.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Health Resources / statistics & numerical data
  • Hospitals, University / standards
  • Hospitals, University / statistics & numerical data
  • Medical Records
  • Medical Staff, Hospital / standards*
  • Observer Variation
  • Patient Admission / statistics & numerical data
  • Peer Review / methods*
  • Practice Patterns, Physicians' / standards
  • Reproducibility of Results