Homogeneity Test of the First-Order Agreement Coefficient in a Stratified Design

M Xu, Z Li, K Mou, KM Shuaib - Entropy, 2023 - mdpi.com
Gwet's first-order agreement coefficient (AC1) is widely used to assess the agreement
between raters. This paper proposes several asymptotic statistics for a homogeneity test of …
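
As a quick illustration of the statistic under study (not this paper's stratified test), Gwet's AC1 for two raters with binary ratings replaces kappa's chance term with p_e = 2π(1 − π), where π is the mean proportion of positive ratings across the two raters; a minimal sketch (the function name is mine):

```python
def gwet_ac1(r1, r2):
    """Gwet's AC1 for two raters and binary (0/1) ratings."""
    n = len(r1)
    pa = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    pi = (sum(r1) + sum(r2)) / (2 * n)             # mean positive proportion
    pe = 2 * pi * (1 - pi)                         # chance agreement, at most 0.5
    return (pa - pe) / (1 - pe)
```

Because 2π(1 − π) never exceeds 0.5, the denominator stays bounded away from zero even at extreme prevalence, which is why AC1 avoids the well-known kappa paradoxes.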

Statistical inference of Gwet's AC1 coefficient for multiple raters and binary outcomes

T Ohyama - Communications in Statistics-Theory and Methods, 2021 - Taylor & Francis
Cohen's kappa and intraclass kappa are widely used for assessing the degree of agreement
between two raters with binary outcomes. However, many authors have pointed out their …

Statistical inference of agreement coefficient between two raters with binary outcomes

T Ohyama - Communications in Statistics-Theory and Methods, 2020 - Taylor & Francis
Scott's pi and Cohen's kappa are widely used for assessing the degree of agreement
between two raters with binary outcomes. However, many authors have pointed out their …
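
For context, the two coefficients named here differ only in the chance-agreement term: Cohen's kappa multiplies each rater's own marginal proportions, while Scott's pi pools the two raters into a common marginal. A minimal sketch for binary ratings (function names are mine):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa: chance term from each rater's own marginals."""
    n = len(r1)
    pa = sum(a == b for a, b in zip(r1, r2)) / n
    p1, p2 = sum(r1) / n, sum(r2) / n
    pe = p1 * p2 + (1 - p1) * (1 - p2)
    return (pa - pe) / (1 - pe)

def scotts_pi(r1, r2):
    """Scott's pi: chance term from the pooled marginal."""
    n = len(r1)
    pa = sum(a == b for a, b in zip(r1, r2)) / n
    pbar = (sum(r1) + sum(r2)) / (2 * n)
    pe = pbar ** 2 + (1 - pbar) ** 2
    return (pa - pe) / (1 - pe)
```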

A unified model for continuous and categorical data

L Lin, AS Hedayat, W Wu… - Statistical Tools for …, 2012 - Springer
In this chapter, we generalize agreement assessment for continuous and categorical data to
cover multiple raters (k ≥ 2), each with multiple readings (m ≥ 1) from each of the n …

An ordinal measure of interrater absolute agreement

G Bove, PL Conti, D Marella - arXiv preprint arXiv:1907.09756, 2019 - arxiv.org
A measure of interrater absolute agreement for ordinal scales is proposed, capitalizing on
the dispersion index for ordinal variables introduced by Giuseppe Leti. The procedure allows …

Sample size calculation for agreement between two raters with binary endpoints using exact tests

G Shan - Statistical methods in medical research, 2018 - journals.sagepub.com
In an agreement test between two raters with binary endpoints, existing methods for sample
size calculation are always based on asymptotic approaches that use limiting distributions of …

Weighted inter-rater agreement measures for ordinal outcomes

D Tran, A Dolgun, H Demirhan - Communications in Statistics …, 2020 - Taylor & Francis
Estimation of the degree of agreement between different raters is of crucial importance in
medical and social sciences. Many different approaches have been proposed in the …
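
One widely used member of the weighted family surveyed here is linearly weighted kappa, which credits partial agreement between nearby ordinal categories. A minimal sketch assuming categories coded 0..k−1 (the function name and the linear weight choice are illustrative, not this paper's proposal):

```python
def weighted_kappa(r1, r2, k):
    """Linearly weighted kappa for two raters, ordinal categories 0..k-1."""
    n = len(r1)
    p = [[0.0] * k for _ in range(k)]              # joint proportion table
    for a, b in zip(r1, r2):
        p[a][b] += 1 / n
    row = [sum(p[i]) for i in range(k)]
    col = [sum(p[i][j] for i in range(k)) for j in range(k)]
    w = lambda i, j: 1 - abs(i - j) / (k - 1)      # linear agreement weights
    po = sum(w(i, j) * p[i][j] for i in range(k) for j in range(k))
    pe = sum(w(i, j) * row[i] * col[j] for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)
```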

Interrater agreement statistics under the two-rater dichotomous-response case with correlated decisions

Z Tian, VM Chinchilli, C Shen, S Zhou - arXiv preprint arXiv:2402.08069, 2024 - arxiv.org
Measurement of the interrater agreement (IRA) is critical in various disciplines. To correct for
potential confounding chance agreement in IRA, Cohen's kappa and many other methods …

Concordance coefficients to measure the agreement among several sets of ranks

J Teles - Journal of Applied Statistics, 2012 - Taylor & Francis
In this paper, two measures of agreement among several sets of ranks, Kendall's
concordance coefficient and top-down concordance coefficient, are reviewed. In order to …
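
Kendall's coefficient of concordance W, the first measure reviewed here, compares the observed spread of the objects' rank totals to its maximum possible value; a minimal sketch assuming complete rankings without ties (the function name is mine):

```python
def kendalls_w(rankings):
    """Kendall's W for m judges each ranking the same n objects (no ties)."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[j] for r in rankings) for j in range(n)]  # rank sum per object
    mean_total = m * (n + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)            # spread of rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

W = 1 when all judges produce identical rankings; W = 0 when the rank sums are all equal, i.e. no consensus at all.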

Statistical models for assessing agreement for quantitative data with heterogeneous random raters and replicate measurements

CT Ekstrøm, B Carstensen - The international journal of biostatistics, 2024 - degruyter.com
Agreement between methods for quantitative measurements is typically assessed by
computing limits of agreement between pairs of methods and/or by illustration through Bland …
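
The limits of agreement mentioned in the snippet are the classical Bland-Altman bounds, the mean difference ± 1.96 standard deviations of the paired differences; a minimal sketch for a single pair of methods (the function name is mine, and it ignores the replicate and random-rater extensions this paper develops):

```python
from statistics import mean, stdev

def limits_of_agreement(x, y):
    """Classical 95% Bland-Altman limits for paired measurements x, y."""
    d = [a - b for a, b in zip(x, y)]   # within-pair differences
    m, s = mean(d), stdev(d)            # bias and its sample SD
    return m - 1.96 * s, m + 1.96 * s
```

About 95% of differences between the two methods are expected to fall inside these limits if the differences are roughly normal with constant variance.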