A method for screening the quality of hospital care using administrative data: preliminary validation results

QRB Qual Rev Bull. 1992 Nov;18(11):361-71. doi: 10.1016/s0097-5990(16)30557-7.

Abstract

Applying a computerized algorithm to administrative data to help assess the quality of hospital care is intriguing. As Iezzoni and colleagues point out, there are major differences of opinion as to the worth of such efforts. This article significantly advances the state of the art in using administrative data to screen for potential quality-of-care problems. In addition, this work on identifying complications of care goes well beyond the emphasis of many government organizations on hospital mortality rates. One question not raised in the paper, however, is: What is a practical upper limit on the sensitivity and specificity achievable when comparing computerized screen results with the consensus judgments of a group of independent physicians? Advanced statistical techniques (such as bootstrapping) might be used to estimate the stability of consensus judgments by physician groups. When the judgments of two groups of physicians are compared with each other, the resulting sensitivity and specificity will not be 0.99! In addition, more training of the physician panel members would probably have increased interrater reliability. Although the researchers acknowledge this problem, their detailed analysis of the panel results is illuminating and represents a model for such studies. It is hoped that the authors will follow up on the avenues opened here. Furthermore, what degree of accuracy is necessary to identify facilities with higher-than-expected rates of complications? The authors discuss the problems involved in using administrative data to target hospitals and departments for more costly in-depth reviews of quality. It is hoped that the promising findings reported here will be validated in other studies. Certainly their algorithms should find a ready audience among insurers and hospitals willing to try them out. Finally, should we expect additional research to lead to improvement in the authors' algorithms? I believe the algorithms will prove difficult to improve upon, but perhaps we should not worry about this. At some point, however, the cost of trying to identify and correct quality problems in "minimally outlier" hospitals will exceed the benefits, particularly given alternative uses for the funds. Might we now be close to the "flat of the curve" in the development of such systems for identifying quality problems? This issue deserves much fuller discussion in future studies.
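
The bootstrap suggestion above can be made concrete. The following is a minimal sketch, not the authors' method: it assumes entirely hypothetical paired chart judgments from two physician panels and resamples charts with replacement to put a confidence interval on the sensitivity and specificity of one panel judged against the other, i.e., the kind of agreement ceiling the commentary argues a computerized screen cannot be expected to exceed. All data, names, and parameters here are illustrative.

```python
# Sketch: bootstrap the sensitivity/specificity of one physician panel
# against another to gauge the stability of consensus judgments.
# Hypothetical data; numpy is assumed to be available.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired judgments on 200 charts: 1 = quality problem
# present, 0 = absent. Panel A is treated as the reference standard.
n = 200
panel_a = rng.binomial(1, 0.30, n)
# Panel B agrees with panel A most of the time; a 15% disagreement rate
# stands in for imperfect interrater reliability.
flip = rng.random(n) < 0.15
panel_b = np.where(flip, 1 - panel_a, panel_a)

def sens_spec(ref, test):
    """Sensitivity and specificity of `test` judged against `ref`."""
    sens = np.mean(test[ref == 1])        # fraction of ref-positives called positive
    spec = np.mean(1 - test[ref == 0])    # fraction of ref-negatives called negative
    return sens, spec

# Bootstrap: resample charts with replacement, recompute both statistics.
boot = np.array([
    sens_spec(panel_a[idx], panel_b[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])

sens_ci = np.percentile(boot[:, 0], [2.5, 97.5])
spec_ci = np.percentile(boot[:, 1], [2.5, 97.5])
print(f"sensitivity 95% CI: {sens_ci.round(3)}")
print(f"specificity 95% CI: {spec_ci.round(3)}")
```

In an actual validation study one would resample the real paired ratings rather than simulate them; the point of the sketch is only that the width of these intervals, and how far the point estimates fall below 0.99, quantifies the practical ceiling on screen performance.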

Publication types

  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Abstracting and Indexing
  • Algorithms*
  • California
  • Chronic Disease / epidemiology
  • Computers*
  • Health Services Research / methods*
  • Hospital Records
  • Hospitals / standards*
  • Humans
  • Patient Discharge*
  • Peer Review
  • Postoperative Complications / epidemiology
  • Quality of Health Care*
  • Reproducibility of Results