Confidence intervals for the interrater agreement measure kappa

VF Flack - Communications in Statistics - Theory and Methods, 1987 - Taylor & Francis
The asymptotic normal approximation to the distribution of the estimated measure κ̂ for
evaluating agreement between two raters has been shown to perform poorly for small …
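For orientation, the quantity at issue is Cohen's kappa and its usual normal-theory interval. The textbook forms (not formulas reproduced from the paper itself) are

```latex
\hat{\kappa} = \frac{p_o - p_e}{1 - p_e}, \qquad
\hat{\kappa} \;\pm\; z_{1-\alpha/2}\,\widehat{\mathrm{SE}}(\hat{\kappa}),
```

where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from the raters' marginal distributions.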

A better confidence interval for kappa (κ) on measuring agreement between two raters with binary outcomes

JJ Lee, ZN Tu - Journal of Computational and Graphical Statistics, 1994 - Taylor & Francis
Although the kappa statistic is widely used in measuring interrater agreement, it is known
that the standard confidence interval estimation behaves poorly in small samples and for …
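A minimal sketch of the naive interval these papers improve on, for a 2×2 table of paired binary ratings. The function name and the crude SE approximation are illustrative choices, not the estimator proposed by Lee and Tu:

```python
from statistics import NormalDist

def kappa_wald_ci(table, alpha=0.05):
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]]
    (rows = rater 1, columns = rater 2) with a naive Wald interval.

    Illustrative only: the SE below is a crude large-sample
    approximation, i.e. the kind of interval the papers above show
    to behave poorly in small samples."""
    (a, b), (c, d) = table
    n = a + b + c + d
    p_o = (a + d) / n                                     # observed agreement
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    se = (p_o * (1 - p_o) / n) ** 0.5 / (1 - p_e)         # crude approximation
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return kappa, (kappa - z * se, kappa + z * se)

# Example: 40 paired binary ratings
print(kappa_wald_ci([[20, 5], [4, 11]]))
```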

Sample size determinations for the two rater kappa statistic

VF Flack, AA Afifi, PA Lachenbruch, HJA Schouten - Psychometrika, 1988 - Springer
This paper gives a method for determining a sample size that will achieve a prespecified
bound on confidence interval width for the interrater agreement measure, κ. The same …
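A generic normal-theory calculation in the same spirit, shown only to make the idea concrete: choose the smallest n whose interval width meets the bound. The planning value sigma_kappa is an assumed input, and this sketch is not Flack et al.'s actual procedure:

```python
from math import ceil
from statistics import NormalDist

def n_for_ci_width(sigma_kappa, width, alpha=0.05):
    """Smallest n such that a normal-theory interval
    kappa_hat +/- z * sigma_kappa / sqrt(n) has total width <= width.

    sigma_kappa is a planning value for the per-observation asymptotic
    SD of kappa_hat (e.g. from a pilot study)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return ceil((2 * z * sigma_kappa / width) ** 2)

# e.g. planning SD 0.9, desired total width 0.2 at 95% confidence
print(n_for_ci_width(0.9, 0.2))   # -> 312
```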

Beyond kappa: A review of interrater agreement measures

M Banerjee, M Capozzoli… - Canadian Journal of …, 1999 - Wiley Online Library
In 1960, Cohen introduced the kappa coefficient to measure chance‐corrected nominal
scale agreement between two raters. Since then, numerous extensions and generalizations …

Estimators of kappa: exact small sample properties

JJ Koval, NJM Blackman - Journal of Statistical Computation and …, 1996 - Taylor & Francis
Many estimators of the measure of agreement between two raters have been proposed. We
consider four estimators which have been shown to be interpretable as corrected for chance …

Utility of weights for weighted kappa as a measure of interrater agreement on ordinal scale

M Heo - Journal of Modern Applied Statistical Methods, 2008 - jmasm.com
Kappa statistics, unweighted or weighted, are widely used for assessing interrater
agreement. The weights of the weighted kappa statistics in particular are defined in terms of …
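For concreteness, the two weighting schemes most often meant here are the linear (Cicchetti-Allison) and quadratic (Fleiss-Cohen) weights. The sketch below uses the standard definitions and is not code from the paper:

```python
def linear_weight(i, j, k):
    # Cicchetti-Allison (linear) agreement weight, categories 0..k-1
    return 1 - abs(i - j) / (k - 1)

def quadratic_weight(i, j, k):
    # Fleiss-Cohen (quadratic) weight: more partial credit for near-misses
    return 1 - (i - j) ** 2 / (k - 1) ** 2

def weighted_kappa(table, weight):
    """Weighted kappa for a k x k table of counts (rows = rater 1,
    columns = rater 2); `weight` maps (i, j, k) to an agreement weight."""
    k = len(table)
    n = sum(map(sum, table))
    p = [[c / n for c in row] for row in table]
    row_m = [sum(row) for row in p]
    col_m = [sum(p[i][j] for i in range(k)) for j in range(k)]
    p_o = sum(weight(i, j, k) * p[i][j]
              for i in range(k) for j in range(k))
    p_e = sum(weight(i, j, k) * row_m[i] * col_m[j]
              for i in range(k) for j in range(k))
    return (p_o - p_e) / (1 - p_e)

# Example: 3-category ordinal scale rated by two raters
t = [[10, 4, 1], [3, 12, 4], [0, 2, 9]]
print(weighted_kappa(t, linear_weight), weighted_kappa(t, quadratic_weight))
```

Quadratic weights give more partial credit to near-misses than linear weights, which is why the choice of scheme can change the kappa value substantially on the same data.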

On marginal dependencies of the 2 × 2 kappa

MJ Warrens - Advances in Statistics, 2014 - Wiley Online Library
Cohen's kappa is a standard tool for the analysis of agreement in a 2 × 2 reliability study.
Researchers are frequently only interested in the kappa‐value of a sample. Various authors …

Large sample standard errors of kappa and weighted kappa.

JL Fleiss, J Cohen, BS Everitt - Psychological Bulletin, 1969 - psycnet.apa.org
Two statistics, kappa and weighted kappa, are available for measuring agreement
between two raters on a nominal scale. Formulas for the standard errors of these two statistics …

The kappa coefficient of agreement for multiple observers when the number of subjects is small

ST Gross - Biometrics, 1986 - JSTOR
Published results on the use of the kappa coefficient of agreement have traditionally been
concerned with situations where a large number of subjects is classified by a small group of …

Sample size requirements for interval estimation of the kappa statistic for interobserver agreement studies with a binary outcome and multiple raters

A Donner, MA Rotondi - The International Journal of Biostatistics, 2010 - degruyter.com
Sample size requirements that achieve a prespecified expected lower limit for a confidence
interval about the intraclass kappa statistic are supplied for the case of multiple raters and a …