Measures of interrater agreement

J Thorac Oncol. 2011 Jan;6(1):6-7. doi: 10.1097/JTO.0b013e318200f983.

Abstract

The kappa statistic is used to assess agreement between two or more raters when the measurement scale is categorical. In this short summary, we discuss and interpret the key features of the kappa statistic, the impact of prevalence on its value, and its utility in clinical research. We also introduce the weighted kappa for outcomes measured on an ordinal scale, and the intraclass correlation coefficient for assessing agreement when the data are measured on a continuous scale.
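As a brief illustration of the statistic the abstract describes, the sketch below computes Cohen's kappa for two raters from its standard definition, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected by chance from the raters' marginal frequencies. This is a minimal example for orientation, not code from the article; the function name and the toy ratings are illustrative.

```python
import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters on a categorical scale.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the chance agreement implied
    by each rater's marginal category frequencies.
    """
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    categories = np.union1d(a, b)
    index = {c: i for i, c in enumerate(categories)}
    k = len(categories)

    # Build the k x k contingency table of joint ratings,
    # then normalize it to joint proportions.
    table = np.zeros((k, k))
    for x, y in zip(a, b):
        table[index[x], index[y]] += 1
    table /= table.sum()

    p_o = np.trace(table)                         # observed agreement
    p_e = table.sum(axis=1) @ table.sum(axis=0)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Illustrative data: two raters classifying 10 specimens as 0 or 1.
# They agree on 8 of 10 (p_o = 0.80); chance agreement p_e = 0.52,
# so kappa = 0.28 / 0.48, about 0.58.
rater1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(cohens_kappa(rater1, rater2))
```

The weighted kappa mentioned in the abstract follows the same pattern but replaces exact agreement with a weight matrix that gives partial credit to near-miss ratings on an ordinal scale.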

Publication types

  • Review

MeSH terms

  • Humans
  • Models, Statistical*
  • Neoplasms / classification
  • Neoplasms / pathology*
  • Observer Variation*
  • Pathology, Clinical
  • Statistics as Topic / methods*