Bayesian inference for kappa from single and multiple studies

S Basu, M Banerjee, A Sen - Biometrics, 2000 - academic.oup.com
Cohen's kappa coefficient is a widely popular measure for chance-corrected nominal scale
agreement between two raters. This article describes Bayesian analysis for kappa that can …
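
For reference, the chance-corrected index these papers build on has the standard form (notation ours, not taken from the cited abstract): with observed agreement p_o and chance-expected agreement p_e computed from the raters' marginal category proportions,

    \kappa = \frac{p_o - p_e}{1 - p_e}, \qquad p_e = \sum_k p_{1k}\, p_{2k},

where p_{1k} and p_{2k} are the proportions of items that raters 1 and 2 assign to category k.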

Maximum likelihood estimation of the kappa coefficient from models of matched binary responses

MM Shoukri, SW Martin, IUH Mian - Statistics in medicine, 1995 - Wiley Online Library
We present an estimate of the kappa‐coefficient of agreement between two methods of
rating based on matched pairs of binary responses and show that the estimate depends on …
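
As a point of reference, below is a minimal Python sketch of the usual plug-in kappa estimate from a 2x2 table of matched binary responses; the model-based maximum likelihood estimators developed in the paper are a separate construction and may differ from this simple estimate.

    # Plug-in (sample) Cohen's kappa for two raters with binary outcomes,
    # computed from the 2x2 table of matched pairs (n11, n10, n01, n00).
    def cohen_kappa_2x2(n11, n10, n01, n00):
        n = n11 + n10 + n01 + n00
        p_o = (n11 + n00) / n                 # observed agreement
        r1 = (n11 + n10) / n                  # rater 1 marginal P(positive)
        r2 = (n11 + n01) / n                  # rater 2 marginal P(positive)
        p_e = r1 * r2 + (1 - r1) * (1 - r2)   # chance-expected agreement
        return (p_o - p_e) / (1 - p_e)

    # Example: 40 concordant positives, 5 + 10 discordant pairs, 45 concordant negatives.
    print(cohen_kappa_2x2(40, 5, 10, 45))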

Weighted least‐squares approach for comparing correlated kappa

HX Barnhart, JM Williamson - Biometrics, 2002 - Wiley Online Library
In the medical sciences, studies are often designed to assess the agreement between
different raters or different instruments. The kappa coefficient is a popular index of …

A bootstrap method for comparing correlated kappa coefficients

S Vanbelle, A Albert - Journal of Statistical Computation and …, 2008 - Taylor & Francis
Cohen's kappa coefficient is traditionally used to quantify the degree of agreement between
two raters on a nominal scale. Correlated kappas occur in many settings (e.g., repeated …
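
The general recipe behind such comparisons, sketched here for the binary case, is to resample subjects with replacement so that each subject's full set of ratings stays together, recompute both kappas on every resample, and form a percentile interval for their difference. This is a generic nonparametric-bootstrap sketch, not a reproduction of the paper's specific procedure.

    import numpy as np

    def kappa_binary(a, b):
        # Cohen's kappa for two 0/1 rating vectors on the same subjects.
        a, b = np.asarray(a), np.asarray(b)
        p_o = np.mean(a == b)
        p_e = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
        return (p_o - p_e) / (1 - p_e)

    def bootstrap_kappa_diff_ci(r1, r2, r3, r4, n_boot=2000, alpha=0.05, seed=0):
        # Correlated kappas: kappa(r1, r2) and kappa(r3, r4) are computed on the
        # same subjects, so subjects (not individual ratings) are resampled.
        r1, r2, r3, r4 = map(np.asarray, (r1, r2, r3, r4))
        rng = np.random.default_rng(seed)
        n = len(r1)
        diffs = [kappa_binary(r1[idx], r2[idx]) - kappa_binary(r3[idx], r4[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
        return np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])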

An estimating equations approach for modelling kappa

N Klar, SR Lipsitz, JG Ibrahim - Biometrical Journal: Journal of …, 2000 - Wiley Online Library
Agreement between raters for binary outcome data is typically assessed using the kappa
coefficient. There has been considerable recent work extending logistic regression to …

An Exact Bootstrap Confidence Interval for κ in Small Samples

N Klar, SR Lipsitz, M Parzen… - Journal of the Royal …, 2002 - academic.oup.com
Agreement between a pair of raters for binary outcome data is typically assessed by using
the κ-coefficient. When the total sample size is small to moderate, and the proportion of …

Statistical inference of agreement coefficient between two raters with binary outcomes

T Ohyama - Communications in Statistics-Theory and Methods, 2020 - Taylor & Francis
Scott's pi and Cohen's kappa are widely used for assessing the degree of agreement
between two raters with binary outcomes. However, many authors have pointed out their …
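
For context, Scott's pi has the same chance-corrected form as Cohen's kappa but computes the chance-agreement term from the pooled (averaged) marginal proportions; for binary outcomes (standard definitions, notation ours):

    \pi = \frac{p_o - p_e}{1 - p_e}, \qquad p_e = \bar{p}^2 + (1 - \bar{p})^2, \qquad \bar{p} = \tfrac{1}{2}(p_{1+} + p_{+1}),

where p_{1+} and p_{+1} are the two raters' marginal proportions of positive ratings.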

Exact one-sided confidence limits for Cohen's kappa as a measurement of agreement

G Shan, W Wang - Statistical methods in medical research, 2017 - journals.sagepub.com
Cohen's kappa coefficient, κ, is a statistical measure of inter-rater agreement or
inter-annotator agreement for qualitative items. In this paper, we focus on interval estimation of κ …

A better confidence interval for kappa (κ) on measuring agreement between two raters with binary outcomes

JJ Lee, ZN Tu - Journal of Computational and Graphical Statistics, 1994 - Taylor & Francis
Although the kappa statistic is widely used in measuring interrater agreement, it is known
that the standard confidence interval estimation behaves poorly in small samples and for …
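
The standard construction referred to here is, presumably, the familiar Wald-type large-sample interval (stated only for context):

    \hat{\kappa} \pm z_{1-\alpha/2}\, \widehat{\mathrm{se}}(\hat{\kappa}),

with \widehat{\mathrm{se}}(\hat{\kappa}) an asymptotic standard-error estimate; such intervals can extend outside the admissible range of kappa and tend to under-cover when the sample is small.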

Statistical inference of Gwet's AC1 coefficient for multiple raters and binary outcomes

T Ohyama - Communications in Statistics-Theory and Methods, 2021 - Taylor & Francis
Cohen's kappa and intraclass kappa are widely used for assessing the degree of agreement
between two raters with binary outcomes. However, many authors have pointed out their …
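
For the two-rater, binary-outcome case, Gwet's AC1 shares the chance-corrected form but uses a different chance-agreement term (standard definition, given here for context; the paper itself treats the multiple-rater extension):

    AC_1 = \frac{p_o - p_e}{1 - p_e}, \qquad p_e = 2\,\bar{p}\,(1 - \bar{p}), \qquad \bar{p} = \tfrac{1}{2}(p_{1+} + p_{+1}),

where \bar{p} is the average of the two raters' marginal proportions of positive ratings. This keeps p_e below 1/2, which is one reason AC1 is often described as more robust to the prevalence-related paradoxes of kappa.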