[BOOK] On weighted kappa and concordance correlation coefficient

WZ Robieson - 1999 - search.proquest.com
Cohen's kappa and weighted kappa are popular statistics for measuring the degree of agreement
between two raters on a nominal scale. Lin's concordance correlation coefficient (CCC) is a measure …
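For orientation, a minimal sketch of the statistics this entry compares: Cohen's kappa, weighted kappa (here with quadratic weights, one common choice among several), and Lin's CCC. The rating vectors are hypothetical, not taken from the thesis.

```python
import numpy as np

def cohens_kappa(x, y, categories, weighted=False):
    """Cohen's kappa (optionally with quadratic weights) for two raters."""
    n, k = len(x), len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    p = np.zeros((k, k))
    for a, b in zip(x, y):
        p[idx[a], idx[b]] += 1 / n                # joint proportion table
    if weighted:
        i, j = np.indices((k, k))
        w = 1 - ((i - j) / (k - 1)) ** 2          # quadratic agreement weights
    else:
        w = np.eye(k)                             # identity weights: plain kappa
    e = np.outer(p.sum(axis=1), p.sum(axis=0))    # expected table under independence
    return ((w * p).sum() - (w * e).sum()) / (1 - (w * e).sum())

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for continuous ratings."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

r1 = [0, 1, 2, 1, 0, 2, 1, 0]                     # hypothetical ratings
r2 = [0, 1, 1, 1, 0, 2, 2, 0]
print(cohens_kappa(r1, r2, [0, 1, 2]))
print(cohens_kappa(r1, r2, [0, 1, 2], weighted=True))
print(lins_ccc(r1, r2))
```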

Measurement of interobserver disagreement: Correction of Cohen's kappa for negative values

TO Kvålseth - Journal of Probability and Statistics, 2015 - Wiley Online Library
As measures of interobserver agreement for both nominal and ordinal categories, Cohen's
kappa coefficients appear to be the most widely used with simple and meaningful …
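The issue the correction targets is visible in the definition itself. With p_o the observed and p_e the chance agreement,

```latex
\kappa \;=\; \frac{p_o - p_e}{1 - p_e},
\qquad
p_o = \sum_i p_{ii},
\qquad
p_e = \sum_i p_{i+}\, p_{+i} .
```

When the margins permit p_o = 0, the minimum is kappa = -p_e / (1 - p_e), which reaches -1 only when p_e = 1/2; in general the attainable lower bound depends on the margins, and this margin-dependent asymmetry of the negative range is what the paper's correction addresses.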

Raking kappa: describing potential impact of marginal distributions on measures of agreement

A Agresti, A Ghosh, M Bini - Biometrical Journal, 1995 - Wiley Online Library
Several authors have noted the dependence of kappa measures of inter‐rater agreement on
the marginal distributions of contingency tables displaying the joint ratings. This paper …
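"Raking" here is iterative proportional fitting of the joint-ratings table to prescribed margins. A minimal sketch, assuming a hypothetical 2×2 table and uniform target margins, of how kappa shifts when the margins are standardized:

```python
import numpy as np

def rake(table, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Iterative proportional fitting: rescale rows and columns in turn
    until the table matches the target margins."""
    t = table.astype(float).copy()
    for _ in range(max_iter):
        t *= (row_targets / t.sum(axis=1))[:, None]
        t *= (col_targets / t.sum(axis=0))[None, :]
        if np.allclose(t.sum(axis=1), row_targets, atol=tol):
            break
    return t

def kappa_from_table(p):
    p = p / p.sum()
    po = np.trace(p)                              # observed agreement
    pe = p.sum(axis=1) @ p.sum(axis=0)            # chance agreement from margins
    return (po - pe) / (1 - pe)

obs = np.array([[40.0, 9.0], [6.0, 45.0]])        # hypothetical joint ratings
print(kappa_from_table(obs))                      # kappa at the observed margins
raked = rake(obs / obs.sum(), np.array([0.5, 0.5]), np.array([0.5, 0.5]))
print(kappa_from_table(raked))                    # kappa after raking to uniform margins
```

Raking preserves the table's odds ratio while changing its margins, which isolates the margin dependence the paper describes.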

Comparing dependent kappa coefficients obtained on multilevel data

S Vanbelle - Biometrical Journal, 2017 - Wiley Online Library
Reliability and agreement are two notions of paramount importance in medical and
behavioral sciences. They provide information about the quality of the measurements. When …

[PDF] Inter-rater reliability: dependency on trait prevalence and marginal homogeneity

K Gwet - Statistical Methods for Inter-Rater Reliability …, 2002 - Citeseer
Researchers have criticized chance-corrected agreement statistics, particularly the Kappa
statistic, as being very sensitive to raters' classification probabilities (marginal probabilities) …
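Gwet's proposed alternative in this line of work is the AC1 statistic, which replaces kappa's margin-based chance term. A sketch for two raters and a binary trait, with a hypothetical high-prevalence table:

```python
import numpy as np

def kappa_and_ac1(table):
    """Cohen's kappa and Gwet's AC1 for a 2x2 joint-ratings table."""
    p = table / table.sum()
    po = np.trace(p)
    row, col = p.sum(axis=1), p.sum(axis=0)
    pe_kappa = row @ col                          # chance agreement, kappa
    pi = (row[0] + col[0]) / 2                    # mean prevalence of category 0
    pe_ac1 = 2 * pi * (1 - pi)                    # chance agreement, AC1
    return (po - pe_kappa) / (1 - pe_kappa), (po - pe_ac1) / (1 - pe_ac1)

# Skewed prevalence: the raters agree on 92% of items, yet kappa is modest.
table = np.array([[90.0, 4.0], [4.0, 2.0]])
print(kappa_and_ac1(table))
```

On this table kappa is about 0.29 while AC1 is about 0.91, despite 92% raw agreement, which illustrates the prevalence sensitivity the paper criticizes.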

Central tendency and matched difference approaches for assessing interrater agreement.

MJ Burke, A Cohen, E Doveh… - Journal of Applied …, 2018 - psycnet.apa.org
In Study 1 of this two-part investigation, we present a “central tendency approach” and
procedures for assessing overall interrater agreement across multiple groups. We define …

Multiple‐rater kappas for binary data: Models and interpretation

D Stoyan, A Pommerening, M Hummel… - Biometrical …, 2018 - Wiley Online Library
Interrater agreement on binary measurements with more than two raters is often assessed
using Fleiss' κ, which is known to be difficult to interpret. In situations where the same raters …
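For reference, Fleiss' κ for subjects each classified by the same number of raters; the counts below are hypothetical.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa. counts[i, j] = number of raters assigning subject i
    to category j; every row sums to the number of raters."""
    counts = np.asarray(counts, float)
    n, r = counts.shape[0], counts[0].sum()
    p_j = counts.sum(axis=0) / (n * r)                     # category proportions
    p_i = ((counts ** 2).sum(axis=1) - r) / (r * (r - 1))  # per-subject agreement
    p_bar, pe_bar = p_i.mean(), (p_j ** 2).sum()
    return (p_bar - pe_bar) / (1 - pe_bar)

# Four raters classify six subjects into two categories (hypothetical data).
counts = np.array([[4, 0], [3, 1], [2, 2], [0, 4], [4, 0], [1, 3]])
print(fleiss_kappa(counts))
```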

A simple method for estimating a regression model for κ between a pair of raters

SR Lipsitz, J Williamson, N Klar… - Journal of the Royal …, 2001 - academic.oup.com
Agreement studies commonly occur in medical research, for example, in the review of X-rays
by radiologists, blood tests by a panel of pathologists and the evaluation of psychopathology …

Assessing inter‐rater reliability when the raters are fixed: two concepts and two estimates

V Rousson - Biometrical Journal, 2011 - Wiley Online Library
Intraclass correlation (ICC) is an established tool to assess inter‐rater reliability. In a seminal
paper published in 1979, Shrout and Fleiss considered three statistical models for inter‐rater …
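The Shrout and Fleiss (1979) estimates are built from a two-way ANOVA decomposition. A minimal sketch of ICC(2,1) (raters random) and ICC(3,1) (raters fixed, the case this paper addresses), computed from the mean squares on hypothetical ratings:

```python
import numpy as np

def icc_2_1_and_3_1(x):
    """ICC(2,1) and ICC(3,1) of Shrout & Fleiss (1979) from two-way
    ANOVA mean squares. x: subjects (rows) x raters (columns)."""
    x = np.asarray(x, float)
    n, k = x.shape
    m = x.mean()
    row_m, col_m = x.mean(axis=1), x.mean(axis=0)
    msr = k * ((row_m - m) ** 2).sum() / (n - 1)          # between subjects
    msc = n * ((col_m - m) ** 2).sum() / (k - 1)          # between raters
    mse = ((x - row_m[:, None] - col_m[None, :] + m) ** 2).sum() / ((n - 1) * (k - 1))
    icc21 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc31 = (msr - mse) / (msr + (k - 1) * mse)
    return icc21, icc31

ratings = np.array([[9, 2, 5, 8],
                    [6, 1, 3, 2],
                    [8, 4, 6, 8],
                    [7, 1, 2, 6],
                    [10, 5, 6, 9],
                    [6, 2, 4, 7]])    # hypothetical 6 subjects x 4 raters
print(icc_2_1_and_3_1(ratings))
```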

A Simulation Study of Rater Agreement Measures with 2×2 Contingency Tables

M Ato, JJ López, A Benavente - Psicologica: International Journal of …, 2011 - ERIC
A simulation study compared six rater agreement measures obtained using three different
approaches. Rater coefficients suggested by …
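In the same spirit, a toy Monte Carlo sketch, assuming two conditionally independent raters with equal accuracy; the design parameters are hypothetical, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_kappa(n_items=100, prevalence=0.3, accuracy=0.85, reps=2000):
    """Monte Carlo distribution of Cohen's kappa over replicated 2x2 tables
    for two conditionally independent raters with the same accuracy."""
    kappas = []
    for _ in range(reps):
        truth = rng.random(n_items) < prevalence
        r1 = truth ^ (rng.random(n_items) > accuracy)   # flip with prob 1 - accuracy
        r2 = truth ^ (rng.random(n_items) > accuracy)
        po = np.mean(r1 == r2)                          # observed agreement
        pe = (np.mean(r1) * np.mean(r2)
              + np.mean(~r1) * np.mean(~r2))            # chance agreement from margins
        kappas.append((po - pe) / (1 - pe))
    return np.mean(kappas), np.std(kappas)

print(simulate_kappa())
```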