Implementing a general framework for assessing interrater agreement in Stata

  • Author: Daniel Klein
  • Published in: The Stata Journal, ISSN 1536-867X, Vol. 18, No. 4, 2018, pp. 871-901
  • Language: English
  • Abstract
    • Despite its well-known weaknesses, researchers continuously choose the kappa coefficient (Cohen, 1960, Educational and Psychological Measurement 20: 37–46; Fleiss, 1971, Psychological Bulletin 76: 378–382) to quantify agreement among raters. Part of kappa’s persistent popularity seems to arise from a lack of available alternative agreement coefficients in statistical software packages such as Stata. In this article, I review Gwet’s (2014, Handbook of Inter-Rater Reliability) recently developed framework of interrater agreement coefficients. This framework extends several agreement coefficients to handle any number of raters, any number of rating categories, any level of measurement, and missing values. I introduce the kappaetc command, which implements this framework in Stata.
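As background for the abstract, the chance-corrected agreement coefficients it mentions share a common structure: observed agreement is adjusted for the agreement expected by chance. A minimal LaTeX sketch of that general form follows, with Cohen's (1960) two-rater chance term shown as one instance; the notation (p_o, p_e, and the marginal proportions p_1k, p_2k) is standard shorthand and is not taken from the article itself.

    % General chance-corrected agreement coefficient:
    %   p_o = observed proportion of agreement
    %   p_e = proportion of agreement expected by chance
    \[ \kappa = \frac{p_o - p_e}{1 - p_e} \]
    % For Cohen's (1960) two-rater kappa, chance agreement is the product of
    % the raters' marginal proportions, summed over the rating categories k:
    \[ p_e = \sum_k p_{1k}\, p_{2k} \]

Coefficients in a framework like the one reviewed here generally keep this structure and differ mainly in how p_e is estimated, which is what allows them to be extended to more raters, more categories, other levels of measurement, and missing values.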

