[Book][B] Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters
KL Gwet - 2014 - books.google.com
The third edition of this book was very well received by researchers working in many
different fields of research. The use of that text also gave these researchers the opportunity …
[PDF][PDF] A-kappa: a measure of agreement among multiple raters
S Gautam - Journal of Data Science, 2014 - pdfs.semanticscholar.org
Medical data and biomedical studies are often imbalanced with a majority of observations
coming from healthy or normal subjects. In the presence of such imbalances, agreement …
A novel graphical evaluation of agreement
J Kim, JH Lee - BMC Medical Research Methodology, 2022 - Springer
Abstract Background The Bland-Altman plot with the limits of agreement has been widely
used as an absolute index for assessing test-retest reliability or reproducibility between two …
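The Bland-Altman limits of agreement mentioned above follow a standard recipe: plot paired differences against paired means and report the bias ± 1.96 SD. A minimal sketch, using made-up paired measurements (not data from the paper):

```python
import numpy as np

# Hypothetical paired measurements from two methods (illustrative only).
method_a = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.3])
method_b = np.array([10.0, 11.9, 9.5, 12.4, 10.6, 11.8])

# Bland-Altman analysis: mean difference (bias) and 95% limits of agreement.
diff = method_a - method_b
bias = diff.mean()
sd = diff.std(ddof=1)
loa_lower = bias - 1.96 * sd
loa_upper = bias + 1.96 * sd

print(f"bias = {bias:.3f}, 95% LoA = [{loa_lower:.3f}, {loa_upper:.3f}]")
```

The "absolute index" interpretation comes from reading the limits directly in the measurement's own units, rather than as a normalized coefficient.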
How reliable are chance‐corrected measures of agreement?
I Guggenmoos‐Holzmann - Statistics in Medicine, 1993 - Wiley Online Library
Chance‐corrected measures of agreement are prone to exhibit paradoxical and counter‐
intuitive results when used as measures of reliability. It is demonstrated that these problems …
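The paradoxical behavior of chance-corrected coefficients can be seen in a few lines: two contingency tables with identical observed agreement can yield very different kappa values once the marginals become imbalanced. A self-contained sketch with illustrative numbers (not taken from the paper):

```python
import numpy as np

def kappa(table):
    """Cohen's kappa from a square two-rater contingency table."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    po = np.trace(t) / n                          # observed agreement
    pe = (t.sum(axis=1) @ t.sum(axis=0)) / n**2   # chance-expected agreement
    return (po - pe) / (1 - pe)

balanced   = [[45, 5], [5, 45]]   # observed agreement 0.90, balanced marginals
imbalanced = [[85, 5], [5, 5]]    # observed agreement 0.90, skewed marginals
print(kappa(balanced), kappa(imbalanced))  # kappa drops despite equal agreement
```

Here the balanced table gives κ = 0.80 while the imbalanced one gives κ ≈ 0.44, even though both raters agree on 90% of subjects in each case.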
Measuring agreement in method comparison studies—a review
PK Choudhary, HN Nagaraja - Advances in ranking and selection …, 2005 - Springer
Assessment of agreement between two or more methods of measurement is of considerable
importance in many areas. In particular, in medicine, new methods or devices that are …
Comparison of ICC and CCC for assessing agreement for data without and with replications
CC Chen, HX Barnhart - Computational statistics & data analysis, 2008 - Elsevier
The intraclass correlation coefficient (ICC) has been traditionally used for assessing
reliability between multiple observers for data with or without replications. Definitions of …
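Of the two coefficients compared in this entry, the CCC has a closed-form sample expression: twice the covariance divided by the sum of the variances plus the squared mean difference. A minimal sketch of that formula (sample-variance version, illustrative data only; the ICC side of the comparison requires an ANOVA decomposition and is omitted here):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (sample version)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov_xy = np.cov(x, y)[0, 1]  # np.cov uses ddof=1 by default
    denom = x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2
    return 2 * cov_xy / denom

# Perfect agreement yields 1; any bias or scale shift pulls it below 1.
print(ccc([1, 2, 3, 4], [1, 2, 3, 4]))
print(ccc([1, 2, 3, 4], [2, 3, 4, 6]))
```

Unlike the Pearson correlation, the CCC penalizes both location and scale shifts, which is why it is preferred for agreement rather than mere association.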
A note on interrater agreement
KF Hirji, MH Rosove - Statistics in Medicine, 1990 - Wiley Online Library
We investigate the properties of a measure of interrater agreement originally proposed by
Rogot and Goldberg [1]. Unlike commonly used measures, this measure not only adjusts for …
Robustness of κ‐type coefficients for clinical agreement
A Vanacore, MS Pellegrino - Statistics in Medicine, 2022 - Wiley Online Library
The degree of inter‐rater agreement is usually assessed through κ‐type coefficients and the
extent of agreement is then characterized by comparing the value of the adopted coefficient …
A study on comparison of generalized kappa statistics in agreement analysis
MS Kim, KJ Song, CM Nam, IK Jung - The Korean Journal of …, 2012 - koreascience.kr
Agreement analysis is conducted to assess reliability among rating results performed
repeatedly on the same subjects by one or more raters. The kappa statistic is commonly …
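The kappa statistic that recurs throughout these entries is straightforward to compute from a two-rater contingency table: observed agreement minus chance-expected agreement, normalized by the maximum possible chance correction. A minimal sketch with a hypothetical 2×2 table:

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square contingency table (rows = rater 1, cols = rater 2)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    po = np.trace(t) / n                          # observed proportion of agreement
    pe = (t.sum(axis=1) @ t.sum(axis=0)) / n**2   # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical counts: 35 of 50 subjects rated identically by both raters.
table = [[20, 5],
         [10, 15]]
print(round(cohens_kappa(table), 3))  # 0.4
```

Generalized versions for more than two raters (Fleiss' kappa and relatives), as compared in this entry, replace the per-rater marginals with category proportions pooled across raters.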