A generalized concordance correlation coefficient for continuous and categorical data

TS King, VM Chinchilli - Statistics in medicine, 2001 - Wiley Online Library
This paper discusses a generalized version of the concordance correlation coefficient for
agreement data. The concordance correlation coefficient evaluates the accuracy and …
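
For context, the original concordance correlation coefficient of Lin (1989) for two continuous measurements $X$ and $Y$, which this paper generalizes to categorical data, is

$$\rho_c = \frac{2\rho\,\sigma_X \sigma_Y}{\sigma_X^2 + \sigma_Y^2 + (\mu_X - \mu_Y)^2},$$

where $\rho$ is the Pearson correlation. It factors as precision ($\rho$) times a bias-correction (accuracy) term, which is presumably the accuracy/precision decomposition the truncated abstract alludes to.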

The effect of collapsing multinomial data when assessing agreement

E Bartfay, A Donner - International journal of epidemiology, 2000 - academic.oup.com
Background: In epidemiological studies, researchers often depend on proxies to obtain
information when primary subjects are unavailable. However, relatively few studies have …

Beyond Kappa: Estimating inter-rater agreement with nominal classifications

N Bendermacher, P Souren - Journal of Modern …, 2009 - digitalcommons.wayne.edu
Cohen's Kappa and a number of related measures can all be criticized for their definition of
correction for chance agreement. A measure is introduced that derives the corrected …
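
For reference, Cohen's kappa corrects the observed agreement $p_o$ for the agreement $p_e$ expected by chance under independent rater marginals,

$$\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad p_o = \sum_k p_{kk}, \qquad p_e = \sum_k p_{k+}\,p_{+k},$$

where $p_{kk}$ is the proportion of subjects both raters place in category $k$ and $p_{k+}$, $p_{+k}$ are the two raters' marginal proportions. The chance term $p_e$ is exactly the component that the alternative measures in this literature redefine.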

A review of agreement measure as a subset of association measure between raters

AO Adejumo, C Heumann, H Toutenburg - 2004 - epub.ub.uni-muenchen.de
Agreement can be regarded as a special case of association, not the other way round. In
virtually all life and social science research, subjects are classified into categories …

Modeling kappa for measuring dependent categorical agreement data

JM Williamson, SR Lipsitz, AK Manatunga - Biostatistics, 2000 - academic.oup.com
A method for analysing dependent agreement data with categorical responses is proposed.
A generalized estimating equation approach is developed with two sets of equations. The …

Estimating rater agreement in 2 × 2 tables: Correction for chance and intraclass correlation

NJM Blackman, JJ Koval - Applied Psychological …, 1993 - journals.sagepub.com
Many estimators of the measure of agreement between two dichotomous ratings of a person
have been proposed. The results of Fleiss (1975) are extended, and it is shown that four …
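
As a worked illustration with hypothetical counts (not taken from the paper): for a 2 × 2 table with 40 subjects rated positive by both raters, 45 rated negative by both, and 10 and 5 in the two discordant cells ($n = 100$), the observed agreement is $p_o = (40 + 45)/100 = 0.85$; the marginal positive rates are $0.50$ and $0.45$, so chance agreement is $p_e = 0.50 \times 0.45 + 0.50 \times 0.55 = 0.50$ and $\kappa = (0.85 - 0.50)/(1 - 0.50) = 0.70$.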

[BOOK][B] Measures of interobserver agreement and reliability

MM Shoukri - 2003 - taylorfrancis.com
Agreement among at least two evaluators is an issue of prime importance to statisticians,
clinicians, epidemiologists, psychologists, and many other scientists. Measuring …

[HTML] A comparison of Cohen's Kappa and Gwet's AC1 when calculating inter-rater reliability coefficients: a study conducted with personality disorder samples

N Wongpakaran, T Wongpakaran, D Wedding… - BMC medical research …, 2013 - Springer
Background: Rater agreement is important in clinical research, and Cohen's Kappa is a
widely used method for assessing inter-rater reliability; however, there are well-documented …
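
Since both statistics share the form $(p_o - p_e)/(1 - p_e)$ and differ only in how the chance term $p_e$ is defined, a short pure-Python sketch makes the contrast concrete (illustrative only, not the authors' code; the toy ratings are hypothetical):

```python
from collections import Counter

def kappa_and_ac1(rater1, rater2):
    """Chance-corrected agreement between two raters on the same items.

    Returns (cohen_kappa, gwet_ac1); both are (p_o - p_e) / (1 - p_e)
    and differ only in the chance-agreement term p_e.
    """
    n = len(rater1)
    cats = sorted(set(rater1) | set(rater2))
    q = len(cats)

    # Observed agreement: proportion of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n

    # Marginal category counts for each rater.
    m1, m2 = Counter(rater1), Counter(rater2)

    # Cohen: chance agreement from the product of the raters' marginals.
    p_e_kappa = sum((m1[k] / n) * (m2[k] / n) for k in cats)

    # Gwet: chance agreement from the averaged marginals pi_k,
    # p_e = sum_k pi_k * (1 - pi_k) / (q - 1).
    pi = {k: (m1[k] + m2[k]) / (2 * n) for k in cats}
    p_e_ac1 = sum(pi[k] * (1 - pi[k]) for k in cats) / (q - 1)

    kappa = (p_o - p_e_kappa) / (1 - p_e_kappa)
    ac1 = (p_o - p_e_ac1) / (1 - p_e_ac1)
    return kappa, ac1

# Skewed-prevalence toy data: raw agreement is high (0.94), but kappa
# drops well below AC1 because its chance term is inflated by the
# unbalanced marginals.
r1 = ["yes"] * 45 + ["no"] * 5
r2 = ["yes"] * 44 + ["no"] + ["yes"] * 2 + ["no"] * 3
print(kappa_and_ac1(r1, r2))  # roughly (0.63, 0.93)
```

On skewed data like this the two coefficients can diverge sharply, which is the kind of behaviour the paper examines in personality disorder samples.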

Assessing agreement between raters from the point of coefficients and log-linear models

AE Yilmaz, T Saracbasi - Journal of Data Science, 2017 - airitilibrary.com
In square contingency tables, analysis of agreement between the row and column
classifications is of interest. For nominal categories, the kappa coefficient is used to summarize …
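
One standard log-linear formulation of agreement in a square table, in the spirit of Tanner and Young (1985), augments the independence model with a diagonal agreement parameter,

$$\log m_{ij} = \lambda + \lambda_i^R + \lambda_j^C + \delta\, I(i = j),$$

where $m_{ij}$ is the expected count in cell $(i, j)$ and $\delta$ measures agreement beyond chance; whether the paper uses exactly this parameterization cannot be told from the snippet.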

Five ways to look at Cohen's kappa

MJ Warrens - Journal of Psychology & Psychotherapy, 2015 - research.rug.nl
The kappa statistic is commonly used for quantifying inter-rater agreement on a nominal
scale. In this review article we discuss five interpretations of this popular coefficient. Kappa is …