Publication:
Reviewing and analyzing peer review Inter-Rater Reliability in a MOOC platform

Date
2020-09
Access rights
info:eu-repo/semantics/openAccess
Publisher
Elsevier
Abstract
Peer assessment activities might be one of the few personalized assessment alternatives to auto-graded activities at scale in Massive Open Online Course (MOOC) environments. Teachers' motivation to implement peer assessment in their courses may go beyond the most straightforward goal (i.e., assessment), as peer assessment also has side benefits, such as providing evidence of and enhancing students' critical thinking, comprehension, and writing skills. However, one of the main drawbacks of implementing peer review activities, especially when the scores are meant to be used as part of the summative assessment, is that they add a high degree of uncertainty to the grades. Motivated by this issue, this paper analyzes the reliability of all the peer assessment activities performed on UNED-COMA, the MOOC platform of the Spanish University for Distance Education (UNED). The study covers 63 peer assessment activities from the different courses on the platform, comprising a total of 27,745 validated tasks and 93,334 peer reviews. Based on Krippendorff's alpha statistic, which measures the agreement reached between reviewers, the results clearly point to the low reliability, and therefore the low validity, of this dataset of peer reviews. We did not find that factors such as the topic of the course, the number of raters, or the number of criteria to be evaluated had a significant effect on reliability. We compare our results with other studies, discuss the potential implications of this low reliability for summative assessment, and provide recommendations to maximize the benefit of implementing peer activities in online courses.
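
The statistic at the center of the abstract, Krippendorff's alpha, is defined as alpha = 1 - D_o / D_e, where D_o is the observed disagreement among reviews of the same submission and D_e is the disagreement expected by chance across all reviews. The sketch below is a minimal, self-contained illustration of that computation for interval-scaled scores on a small hypothetical reviewer-by-submission matrix; the function name, the toy data, and the choice of the interval (squared-difference) metric are illustrative assumptions and do not reproduce the authors' actual analysis pipeline.

    import numpy as np

    def krippendorff_alpha_interval(ratings):
        """Krippendorff's alpha with the interval metric: alpha = 1 - D_o / D_e.

        ratings: 2-D array of shape (reviewers, submissions); np.nan marks a
        missing review (a reviewer who did not grade that submission).
        """
        ratings = np.asarray(ratings, dtype=float)

        # Keep only submissions scored by at least two reviewers ("pairable" units).
        units = [col[~np.isnan(col)] for col in ratings.T]
        units = [u for u in units if len(u) >= 2]
        n = sum(len(u) for u in units)  # total number of pairable scores

        # Observed disagreement D_o: squared score differences within each submission.
        d_o = 0.0
        for u in units:
            d_o += ((u[:, None] - u[None, :]) ** 2).sum() / (len(u) - 1)
        d_o /= n

        # Expected disagreement D_e: squared differences across all pairable scores.
        pooled = np.concatenate(units)
        d_e = ((pooled[:, None] - pooled[None, :]) ** 2).sum() / (n * (n - 1))

        return 1.0 - d_o / d_e

    # Toy example: 4 peer reviewers scoring 6 submissions on a 0-10 scale;
    # np.nan means that reviewer was not assigned that submission.
    scores = np.array([
        [8,      7,      np.nan, 5,      9,      4],
        [7,      7,      6,      np.nan, 8,      5],
        [9,      np.nan, 5,      6,      9,      np.nan],
        [np.nan, 6,      6,      4,      np.nan, 4],
    ])
    print(round(krippendorff_alpha_interval(scores), 3))

Values near 1 indicate strong inter-rater agreement, values near 0 indicate agreement no better than chance, and negative values indicate systematic disagreement. Third-party implementations (e.g., the krippendorff package on PyPI) provide the same statistic for nominal and ordinal data as well.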
Description
This is the accepted manuscript of the article. The version of record was first published in Computers & Education, Volume 154, 2020, 103894, ISSN 0360-1315, and is available online at the publisher's website: https://doi.org/10.1016/j.compedu.2020.103894
Keywords
Peer assessment, MOOCs, Krippendorff's alpha, inter-rater reliability (IRR), reliability
Citation
Felix Garcia-Loro, Sergio Martin, José A. Ruipérez-Valiente, Elio Sancristobal, Manuel Castro, Reviewing and analyzing peer review Inter-Rater Reliability in a MOOC platform, Computers & Education, Volume 154, 2020, 103894, ISSN 0360-1315, https://doi.org/10.1016/j.compedu.2020.103894.
Center
Faculties and schools::E.T.S. de Ingenieros Industriales
Department
Ingeniería Eléctrica, Electrónica, Control, Telemática y Química Aplicada a la Ingeniería