Measures of clinical agreement for nominal and categorical data: The kappa coefficient
References (20)
- et al. On the methods and theory of reliability. J. Nervous Mental Dis. (1976)
- Reliability. Phys. Ther. (1986)
- et al. Reliability in the clinical setting. Res. Newslett. Am. Phys. Ther. Assoc. (1991)
- Measuring nominal scale agreement among many raters. Psychol. Bull. (1971)
- et al. Large sample variance of kappa in the case of different sets of raters. Psychol. Bull. (1971)
- et al. The measurement of observer agreement for categorical data. Biometrics (1977)
- Kappa coefficient calculation using multiple ratings per subject: a special communication. Phys. Ther. (1989)
- A generalized kappa coefficient. Ed. Psychol. Measure. (1982)
- et al. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Ed. Psychol. Measure. (1973)
- Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol. Bull. (1968)
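The unweighted kappa coefficient that this article and several of the references above concern measures agreement between two raters on nominal categories, corrected for chance: kappa = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected from the raters' marginal frequencies. As a minimal illustrative sketch (pure Python; the two-rater data below are invented for the example, not taken from the article):

```python
# Sketch of Cohen's (unweighted) kappa for two raters on nominal labels.
# kappa = (p_o - p_e) / (1 - p_e):
#   p_o = observed proportion of exact agreement
#   p_e = chance agreement implied by each rater's marginal frequencies

def cohens_kappa(ratings_a, ratings_b):
    """Compute Cohen's kappa from two equal-length lists of nominal labels."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    # Observed agreement: fraction of subjects both raters labeled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of marginal proportions, summed over categories.
    p_e = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in categories
    )
    return (p_o - p_e) / (1 - p_e)

# Invented example: two raters classify 8 subjects as "yes" or "no".
rater_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(rater_1, rater_2), 3))  # → 0.5
```

Here the raters agree on 6 of 8 subjects (p_o = 0.75), but with balanced marginals chance alone yields p_e = 0.5, so kappa = 0.5 rather than 0.75.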
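For ordered categories, the weighted kappa of the 1968 reference above penalizes disagreements by how far apart they are: kappa_w = 1 − Σ w_ij·o_ij / Σ w_ij·e_ij, with o_ij the observed joint proportions, e_ij the chance-expected proportions from the marginals, and w_ij a disagreement weight. A minimal sketch assuming linear weights w_ij = |i − j| on integer-coded categories (the data are invented for illustration):

```python
# Sketch of weighted kappa (Cohen, 1968) with linear disagreement weights
# on categories coded 0..k-1:
#   kappa_w = 1 - sum(w * observed) / sum(w * expected)
# where w[i][j] = |i - j|, so adjacent-category disagreements cost less
# than distant ones, and exact agreement costs nothing.

def weighted_kappa(ratings_a, ratings_b, k):
    """Linear-weighted kappa for two raters over k ordered categories."""
    n = len(ratings_a)
    # Observed joint proportions o[i][j].
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        obs[a][b] += 1 / n
    # Marginal proportions for each rater.
    marg_a = [sum(obs[i]) for i in range(k)]
    marg_b = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Weighted observed and chance-expected disagreement.
    num = sum(abs(i - j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(abs(i - j) * marg_a[i] * marg_b[j]
              for i in range(k) for j in range(k))
    return 1 - num / den

# Invented example: two raters grade 6 subjects on a 3-point ordinal scale.
print(weighted_kappa([0, 1, 2, 1, 0, 2], [0, 2, 2, 1, 0, 1], 3))  # → 0.625
```

With quadratic weights (|i − j|² in place of |i − j|), weighted kappa coincides with the intraclass correlation coefficient, which is the equivalence discussed in the 1973 reference above.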
Cited by (97)
- Divergent nonlinear trends of global drought and its multivariate characteristics (2024, Journal of Hydrology)
- MLATE: Machine learning for predicting cell behavior on cardiac tissue engineering scaffolds (2023, Computers in Biology and Medicine)
- Inter-rater reliability of the diagnosis of otitis media based on otoscopic images and wideband tympanometry measurements (2022, International Journal of Pediatric Otorhinolaryngology)
- Application of Automated Hand Ultrasound Scanning and a Simplified Three-Joint Scoring System for Assessment of Rheumatoid Arthritis Activity (2021, Ultrasound in Medicine and Biology)
Copyright © 1992 Published by Elsevier Ltd.