Exact one-sided confidence limits for Cohen's kappa as a measurement of agreement
G Shan, W Wang - Statistical methods in medical research, 2017 - journals.sagepub.com
Cohen's kappa coefficient, κ, is a statistical measure of inter-rater agreement or inter-annotator agreement for qualitative items. In this paper, we focus on interval estimation of κ …
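As the snippet above notes, κ measures agreement corrected for chance. A minimal sketch of the point estimate, κ = (p_o − p_e)/(1 − p_e), for two raters labeling the same items (illustrative only; not code from the paper):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items (nominal labels)."""
    n = len(rater1)
    # observed agreement p_o: fraction of items with identical labels
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement p_e from the product of marginal label frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2.get(k, 0) for k in c1) / n**2
    return (p_o - p_e) / (1 - p_e)
```

For example, `cohens_kappa(['y','y','n','n'], ['y','y','n','y'])` gives 0.5: the raters agree on 3 of 4 items, but half that agreement is expected by chance alone.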
Interval estimation for Cohen's kappa as a measure of agreement
NJM Blackman, JJ Koval - Statistics in medicine, 2000 - Wiley Online Library
Cohen's kappa statistic is a very well known measure of agreement between two raters with
respect to a dichotomous outcome. Several expressions for its asymptotic variance have …
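The paper above compares several asymptotic variance expressions. As a rough sketch only, here is a Wald-type interval using the simplified large-sample variance p_o(1 − p_o)/(n(1 − p_e)²) often attributed to Cohen's original paper; this is not one of the refined estimators Blackman and Koval evaluate:

```python
import math
from collections import Counter

def kappa_wald_ci(rater1, rater2, z=1.96):
    """Wald interval for kappa with the simplified variance
    p_o(1 - p_o) / (n (1 - p_e)^2); a crude large-sample sketch."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2.get(k, 0) for k in c1) / n**2
    kappa = (p_o - p_e) / (1 - p_e)
    se = math.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))
    return kappa - z * se, kappa + z * se
```

Intervals built this way can behave poorly in small samples, which is the motivation for the refined and exact methods in several of the papers listed here.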
Ridit and exponential type scores for estimating the kappa statistic
Cohen's kappa coefficient is a commonly used method for estimating interrater agreement
for nominal and/or ordinal data; thus agreement is adjusted for that expected by chance. The …
Weighted least‐squares approach for comparing correlated kappa
HX Barnhart, JM Williamson - Biometrics, 2002 - Wiley Online Library
In the medical sciences, studies are often designed to assess the agreement between
different raters or different instruments. The kappa coefficient is a popular index of …
Assessing the inter-rater agreement for ordinal data through weighted indexes
D Marasini, P Quatto… - Statistical methods in …, 2016 - journals.sagepub.com
Assessing the inter-rater agreement between observers, in the case of ordinal variables, is
an important issue in both the statistical theory and biomedical applications. Typically, this …
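For ordinal scales like the one discussed above, a common weighted index is the quadratically weighted kappa, which penalizes disagreements by squared category distance. A self-contained sketch under that standard weighting scheme (illustrative; the paper studies a broader family of weighted indexes):

```python
def weighted_kappa(rater1, rater2, categories):
    """Quadratically weighted kappa for ordinal categories.
    categories lists the ordered levels, e.g. [1, 2, 3]."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater1)
    # observed joint proportions and marginal proportions per rater
    obs = [[0.0] * k for _ in range(k)]
    m1, m2 = [0.0] * k, [0.0] * k
    for a, b in zip(rater1, rater2):
        i, j = idx[a], idx[b]
        obs[i][j] += 1 / n
        m1[i] += 1 / n
        m2[j] += 1 / n
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2  # quadratic disagreement weight
            num += w * obs[i][j]
            den += w * m1[i] * m2[j]
    return 1 - num / den
```

With identical ratings the weighted disagreement is zero and the index equals 1; larger category distances between the two raters are penalized more heavily than adjacent-category disagreements.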
An Exact Bootstrap Confidence Interval for κ in Small Samples
N Klar, SR Lipsitz, M Parzen… - Journal of the Royal …, 2002 - academic.oup.com
Agreement between a pair of raters for binary outcome data is typically assessed by using
the κ-coefficient. When the total sample size is small to moderate, and the proportion of …
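The entry above develops an exact bootstrap interval for small samples. For contrast, here is an ordinary percentile-bootstrap sketch (resampling item pairs), which is a generic stand-in and not the exact enumeration method of Klar et al.:

```python
import random
from collections import Counter

def bootstrap_kappa_ci(rater1, rater2, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap interval for Cohen's kappa;
    a generic sketch, not the paper's exact bootstrap."""
    rng = random.Random(seed)
    pairs = list(zip(rater1, rater2))
    n = len(pairs)

    def kappa(sample):
        p_o = sum(a == b for a, b in sample) / n
        c1 = Counter(a for a, _ in sample)
        c2 = Counter(b for _, b in sample)
        p_e = sum(c1[k] * c2.get(k, 0) for k in c1) / n**2
        if p_e == 1:  # degenerate resample: all pairs identical
            return 1.0
        return (p_o - p_e) / (1 - p_e)

    # resample items with replacement, then take percentile endpoints
    stats = sorted(kappa([rng.choice(pairs) for _ in range(n)])
                   for _ in range(n_boot))
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

As the abstract warns, resampling-based intervals need care when the sample is small or the agreeing proportion is extreme, since many resamples are degenerate; the guard above is one ad hoc choice.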
Agreement between raters and groups of raters
S Vanbelle - 2009 - orbi.uliege.be
Agreement between raters on a categorical scale is not only a subject of scientific research
but also a problem frequently encountered in practice. Whenever a new scale is developed …
A better confidence interval for kappa (κ) on measuring agreement between two raters with binary outcomes
JJ Lee, ZN Tu - Journal of Computational and Graphical Statistics, 1994 - Taylor & Francis
Although the kappa statistic is widely used in measuring interrater agreement, it is known
that the standard confidence interval estimation behaves poorly in small samples and for …
Interval estimation under two study designs for kappa with binary classifications
CA Hale, JL Fleiss - Biometrics, 1993 - JSTOR
Cornfield's test-based method of setting a confidence interval on a parameter associated
with a two-by-two contingency table is adapted for use with the measure of agreement …
A note on interval estimation of kappa in a series of 2×2 tables
KJ Lui, C Kelly - Statistics in medicine, 1999 - Wiley Online Library
When there are confounders in reliability studies, failing to stratify data to account for these
confounding effects may produce a misleading estimate of the interrater agreement. In this …