RESEARCH ARTICLE


High Agreement and High Prevalence: The Paradox of Cohen’s Kappa



Slavica Zec1, Nicola Soriani1, Rosanna Comoretto2, Ileana Baldi1, *
1 Department of Cardiac, Thoracic and Vascular Sciences, Unit of Biostatistics, Epidemiology and Public Health, University of Padova, Padova, Italy
2 Department of Statistics and Quantitative Methods, University of Milano-Bicocca, Milan, Italy


© 2017 Zec et al.

open-access license: This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at: https://creativecommons.org/licenses/by/4.0/legalcode. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

* Address correspondence to this author at the Department of Cardiac, Thoracic and Vascular Sciences, University of Padova, Via Loredan, 18, 35131 Padova, Italy; Tel: +390498275403; E-mail: ileana.baldi@unipd.it


Abstract

Background:

Cohen’s Kappa is the most widely used agreement statistic in the literature. Under certain conditions, however, it is affected by a paradox that yields misleadingly low estimates of the underlying agreement.
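
For readers unfamiliar with the mechanics of the paradox, a brief worked example may help (the definitions are standard; the numbers are illustrative and not from this study). For two raters, Cohen’s Kappa corrects the observed agreement p_o by the chance agreement p_e implied by the raters’ marginal proportions:

\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad p_e = \sum_k p_{k+} \, p_{+k}.

When one category dominates, p_e approaches 1 and Kappa collapses. For a hypothetical 2×2 table with cell counts 95, 3, 2, 0 (so p_o = 0.95), the marginal proportions are 0.98 and 0.97, hence

p_e = 0.98 \cdot 0.97 + 0.02 \cdot 0.03 = 0.9512, \qquad \kappa = \frac{0.95 - 0.9512}{1 - 0.9512} \approx -0.02,

a negative value despite 95% raw agreement: high agreement combined with high prevalence produces a Kappa near (or below) zero.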

Objective:

The aim of this study is to provide readers with sufficient information to make an informed choice of agreement measure, by underlining some optimal properties of Gwet’s AC1 in comparison with Cohen’s Kappa, using a real-data example.

Method:

In the course of a literature review, we asked a panel of three evaluators to judge the quality of 57 randomized controlled trials, assigning each trial a score on the Jadad scale. Quality was assessed along the following dimensions: study design, randomization unit, and type of primary endpoint. For each of these features, agreement among the three evaluators was calculated with both Cohen’s Kappa and Gwet’s AC1, and the resulting values were compared with the observed agreement.
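
To make the computation concrete, the following minimal Python sketch (not the authors’ code) implements both statistics for the simplest case of two raters and binary ratings; the study itself involved three raters, for which pairwise or multi-rater extensions of these formulas apply. The function names and the contingency table are illustrative assumptions, with the table deliberately skewed to reproduce the paradox.

# Minimal sketch: Cohen's Kappa and Gwet's AC1 for two raters and
# binary ratings, computed from a 2x2 contingency table (hypothetical data).

def cohen_kappa(table):
    # table[i][j] = number of items rated category i by rater A and j by rater B
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n                      # observed agreement
    row = [sum(table[i][j] for j in range(k)) / n for i in range(k)]  # rater A marginals
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]  # rater B marginals
    p_e = sum(row[i] * col[i] for i in range(k))                      # chance agreement from marginals
    return (p_o - p_e) / (1 - p_e)

def gwet_ac1(table):
    # Same observed agreement, but chance agreement is based on the mean
    # category prevalence, so it stays small under skewed marginals.
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n
    row = [sum(table[i][j] for j in range(k)) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    pi = [(row[i] + col[i]) / 2 for i in range(k)]                    # mean prevalence per category
    p_e = sum(p * (1 - p) for p in pi) / (k - 1)                      # Gwet's chance agreement
    return (p_o - p_e) / (1 - p_e)

# Skewed hypothetical table: 95 yes/yes, 3 yes/no, 2 no/yes, 0 no/no
table = [[95, 3], [2, 0]]
print(f"Cohen's Kappa = {cohen_kappa(table):.3f}")  # ~ -0.025 despite 95% raw agreement
print(f"Gwet's AC1    = {gwet_ac1(table):.3f}")     # ~  0.947, close to the observed agreement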

Results:

The values of Cohen’s Kappa would lead one to believe that the agreement levels for the variables Unit, Design, and Primary Endpoint are entirely unsatisfactory. The AC1 statistic, by contrast, yields plausible values in line with the corresponding observed agreement.

Conclusion:

We conclude that it would always be appropriate to adopt the AC1 statistic, thereby avoiding any risk of incurring the paradox and drawing wrong conclusions from the agreement analysis.

Keywords: Agreement statistics, Cohen's Kappa, Gwet’s AC1, Concordance analysis, Inter-rater agreement, Quality assessment of RCT.