The art and science of chart review

Jt Comm J Qual Improv. 2000 Mar;26(3):115-36. doi: 10.1016/s1070-3241(00)26009-4.

Abstract

Background: Explicit chart review was an integral part of an ongoing national cooperative project, "Using Achievable Benchmarks of Care to Improve Quality of Care for Outpatients with Depression," conducted by a large managed care organization (MCO) and an academic medical center. Many investigators overlook the complexities involved in obtaining high-quality data. Given a scarcity of advice in the quality improvement (QI) literature on how to conduct chart review, the process of chart review was examined and specific techniques for improving data quality were proposed.

Methods: The abstraction tool was developed and tested in a prepilot phase; perhaps the greatest problem detected was abstractor assumption and interpretation. The need for a clear distinction between symptoms of depression or anxiety and a physician diagnosis of major depression or anxiety disorder also became apparent. In designing the variables for the chart review module, four key aspects were considered: classification, format, definition, and presentation. For example, issues in format include the use of free-text versus numeric variables, categorical variables, and medication variables (which can be especially challenging for abstraction projects). Quantitative measures of reliability and validity were used to improve and maintain the quality of chart review data. Measuring reliability and validity aids development of the chart review tool, continuous maintenance of data quality throughout the production phase of chart review, and final documentation of data quality. For projects that require ongoing abstraction of large numbers of clinical records, data quality may be monitored with control charts and the principles of statistical process control.
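As an illustration of the statistical process control idea mentioned above, the sketch below plots weekly abstraction error proportions against p-chart control limits. It is a minimal example, not the authors' procedure: the audit counts are hypothetical, and it assumes periodic re-abstraction audits that yield a count of fields checked and disagreements found.

```python
"""Minimal sketch: monitoring chart-abstraction error rates with a p-chart.
The audit counts below are hypothetical, not data from the study."""

# Hypothetical weekly re-abstraction audits: (fields re-checked, disagreements found)
audits = [
    (200, 9), (200, 6), (180, 8), (220, 7), (200, 11),
    (210, 5), (190, 6), (200, 4), (200, 8), (210, 30),  # final week looks unusual
]

total_checked = sum(n for n, _ in audits)
total_errors = sum(d for _, d in audits)
p_bar = total_errors / total_checked  # center line: overall error proportion

print(f"Center line (p-bar): {p_bar:.3f}")
for week, (n, d) in enumerate(audits, start=1):
    p = d / n
    sigma = (p_bar * (1 - p_bar) / n) ** 0.5  # binomial standard error for this week's sample size
    ucl = p_bar + 3 * sigma                   # upper control limit
    lcl = max(0.0, p_bar - 3 * sigma)         # lower control limit, floored at zero
    flag = "OUT OF CONTROL" if (p > ucl or p < lcl) else "in control"
    print(f"Week {week:2d}: p={p:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}  {flag}")
```

A point outside the three-sigma limits (such as the final hypothetical week) would prompt investigation, for example retraining an abstractor or revising an ambiguous variable definition.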

Results: The chart review module, which contained 140 variables, was built using MedQuest software, a suite of tools designed for customized data collection. The overall interrater reliability increased from 80% in the prepilot phase to greater than 96% in the final phase (which included three abstractors and 465 unique charts). The mean time per chart was calculated for each abstractor; the largest abstractor mean was 13.7 +/- 13 minutes.
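The overall interrater reliability reported above can be summarized, in its simplest form, as percent agreement between abstractors over the same charts and variables. The sketch below shows that calculation on hypothetical records; the variable names and values are assumptions for illustration only, and in practice chance-corrected statistics such as Cohen's kappa are often reported alongside raw agreement.

```python
"""Minimal sketch: interrater reliability as simple percent agreement.
All variable names and values below are hypothetical."""

# Values abstracted from the same (hypothetical) chart by two abstractors.
abstractor_a = {"dx_major_depression": "yes", "antidepressant": "sertraline", "visits_6mo": 3}
abstractor_b = {"dx_major_depression": "yes", "antidepressant": "sertraline", "visits_6mo": 4}

shared = set(abstractor_a) & set(abstractor_b)           # variables abstracted by both
agree = sum(abstractor_a[v] == abstractor_b[v] for v in shared)
percent_agreement = 100 * agree / len(shared)
print(f"Agreement on {len(shared)} variables: {percent_agreement:.1f}%")  # 66.7% here
```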

Conclusions: In general, chart review is more difficult than it appears on the surface. It is also project specific, making a "cookbook" approach difficult. Many factors, such as imprecisely worded research questions, vague specification of variables, poorly designed abstraction tools, inappropriate interpretation by abstractors, and poor or missing recording of data in the chart, may compromise data quality.

Publication types

  • Comparative Study
  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Data Collection
  • Data Interpretation, Statistical
  • Female
  • Humans
  • Male
  • Managed Care Programs
  • Medical Audit / standards*
  • Medical Records / standards
  • Models, Statistical
  • Software