Five ways to look at Cohen's kappa
MJ Warrens - Journal of Psychology & Psychotherapy, 2015 - research.rug.nl
The kappa statistic is commonly used for quantifying inter-rater agreement on a nominal
scale. In this review article we discuss five interpretations of this popular coefficient. Kappa is …
Note on Cohen's kappa
TO Kvålseth - Psychological reports, 1989 - journals.sagepub.com
Cohen's Kappa is a measure of the over-all agreement between two raters classifying items
into a given set of categories. This communication describes a simple computational method …
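Several of these entries describe computing Cohen's kappa from a two-rater contingency table. As a minimal sketch (not the specific method of any paper listed here), kappa compares observed agreement with the agreement expected by chance from the raters' marginal totals; the example table below is hypothetical.

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion (agreement) matrix.

    confusion[i][j] = count of items rater A put in category i
    and rater B put in category j.
    """
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    # observed proportion of agreement (diagonal of the table)
    po = sum(confusion[i][i] for i in range(k)) / n
    # marginal totals for each rater
    row = [sum(confusion[i]) for i in range(k)]
    col = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    # chance-expected agreement from the marginals
    pe = sum(row[i] * col[i] for i in range(k)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical 2x2 table: raw agreement is 35/50 = 0.7,
# chance agreement is 0.5, so kappa is about 0.4
table = [[20, 5],
         [10, 15]]
print(cohens_kappa(table))
```

This illustrates why kappa is lower than simple percent agreement: the chance term `pe` is subtracted from both numerator and denominator.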
Kappa coefficient: a popular measure of rater agreement
In mental health and psychosocial studies it is often necessary to report on the between-
rater agreement of measures used in the study. This paper discusses the concept of …
A new interpretation of the weighted kappa coefficients
S Vanbelle - Psychometrika, 2016 - Springer
Reliability and agreement studies are of paramount importance. They do contribute to the
quality of studies by providing information about the amount of error inherent to any …
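The Vanbelle entry concerns weighted kappa, which extends Cohen's kappa to ordered categories by penalizing large disagreements more than near-misses. A minimal sketch with the standard linear and quadratic disagreement weights (the example matrix is hypothetical, not from the paper):

```python
def weighted_kappa(confusion, weights="linear"):
    """Weighted kappa for ordinal categories.

    Uses disagreement weights w(i, j) = |i - j| / (k - 1) ("linear")
    or its square ("quadratic"); kappa_w = 1 - sum(w*observed) / sum(w*expected).
    """
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    row = [sum(confusion[i]) for i in range(k)]
    col = [sum(confusion[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):
        d = abs(i - j) / (k - 1)
        return d if weights == "linear" else d * d

    # weighted observed and chance-expected disagreement
    obs = sum(w(i, j) * confusion[i][j] for i in range(k) for j in range(k))
    exp = sum(w(i, j) * row[i] * col[j] / n for i in range(k) for j in range(k))
    return 1 - obs / exp

# Hypothetical 3x3 ordinal table: most disagreements are adjacent categories
table = [[10, 2, 0],
         [1, 8, 1],
         [0, 3, 5]]
print(weighted_kappa(table))
```

With identity weights (1 on the diagonal, 0 elsewhere) this formula reduces to ordinary Cohen's kappa, which is one common way these coefficients are related.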
Learning how to differ: agreement and reliability statistics in psychiatry
DL Streiner - The Canadian Journal of Psychiatry, 1995 - journals.sagepub.com
Whenever two or more raters evaluate a patient or student, it may be necessary to determine
the degree to which they assign the same label or rating to the subject. The major problem in …
Interrater Agreement Measures: Comments on Kappan, Cohen's Kappa, Scott's π, and Aickin's α
LM Hsu, R Field - Understanding Statistics, 2003 - Taylor & Francis
The Cohen (1960) kappa interrater agreement coefficient has been criticized for penalizing
raters (eg, diagnosticians) for their a priori agreement about the base rates of categories (eg …
Comparison of the null distributions of weighted kappa and the C ordinal statistic
DV Cicchetti, JL Fleiss - Applied Psychological Measurement, 1977 - journals.sagepub.com
It frequently occurs in psychological research that an investigator is interested in assessing
the extent of interrater agreement when the data are measured on an ordinal scale. This …
Beyond kappa: A review of interrater agreement measures
M Banerjee, M Capozzoli… - Canadian journal of …, 1999 - Wiley Online Library
In 1960, Cohen introduced the kappa coefficient to measure chance‐corrected nominal
scale agreement between two raters. Since then, numerous extensions and generalizations …
Kappa testi [Kappa test]
S Kılıç - Journal of mood disorders, 2015 - academia.edu
Kappa coefficient is a statistic which measures inter-rater agreement for categorical items. It
is generally thought to be a more robust measure than simple percent agreement …
Kappa statistic is not satisfactory for assessing the extent of agreement between raters
K Gwet - Statistical methods for inter-rater reliability assessment, 2002 - agreestat.com
Evaluating the extent of agreement between 2 or between several raters is common in
social, behavioral and medical sciences. The objective of this paper is to provide a detailed …