Publication: Escribir para aprender: evaluación automática de respuestas abiertas con G-Rubric (Writing to learn: automatic assessment of open-ended answers with G-Rubric)
Date
2018-05
Access rights
info:eu-repo/semantics/openAccess
Publisher
Universidad Nacional de Educación a Distancia (España). Instituto Universitario de Educación a Distancia (IUED)
Abstract
The growing demand for online education, together with the budget cuts of recent years, has impoverished the feedback students receive and concentrated assessment on objective tests. "Writing to learn" is a method that fosters critical thinking and the skills of analysis and synthesis, which underlie more complex methodologies such as problem-based learning (PBL); using it as a learning tool, however, requires giving manual feedback. To make "writing to learn" viable and provide the required feedback in a course with many students, the Economic History teaching team has begun to use a tool developed at UNED by the Department of Developmental and Educational Psychology. The tool, based on Latent Semantic Analysis techniques, provides feedback when students answer open-ended questions, allowing them to improve their answers iteratively.
The increasing demand for higher education and lifelong training has induced a rising supply of online courses, provided both by distance-education institutions and by conventional face-to-face universities. Simultaneously, public universities’ budgets have suffered serious cuts, at least in Europe. Owing to this shortage of human and material resources, large online courses face great challenges in providing quality formative assessment, especially the kind that offers rich, personalized feedback. Peer-to-peer assessment could partially address the problem, but it has its own shortcomings. The act of writing has been identified as a high-impact learning tool across disciplines, and competence in writing has been shown to aid access to higher education and retention. Writing to learn (WTL) is also a way to foster critical thinking and a suitable method for training soft skills such as analysis and synthesis. These skills are the basis for more complex learning methodologies such as PBL, the case method, etc. The WTL approach, however, requires regular feedback from dedicated lecturers, and consistently assessing free-text answers is more difficult than usually assumed, especially in large or massive courses. Multiple-choice “objective” assessment appears to be an obvious alternative, but the authors believe it has serious shortcomings when the intended learning outcomes involve written expression and complex analysis. To address this dilemma, the authors decided to test G-Rubric, an LSA-based automatic assessment tool developed by researchers of the Department of Developmental and Educational Psychology at UNED (the Spanish National Distance Education University). The experience was launched in the 2015-2016 academic year. Using G-Rubric, we provided automated, formative, and iterative feedback on our students’ answers (70-200 words) to open-ended questions. This allowed students to improve their answers and practice their writing skills, thus helping them both to organize concepts and to build knowledge. In this paper we present the encouraging results of this first experience with UNED Business Degree students in 2015/16.
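The abstract names Latent Semantic Analysis (LSA) as the technique behind G-Rubric's feedback, but the tool's implementation is not published in this record. The following Python sketch only illustrates the general LSA scoring idea under stated assumptions: a small hypothetical domain corpus, scikit-learn's TfidfVectorizer and TruncatedSVD to build the latent space, and cosine similarity between a student draft and a reference answer as the feedback signal.

```python
# Minimal sketch of LSA-based answer scoring. This is NOT G-Rubric's actual
# implementation; it only illustrates the technique the abstract names.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical toy corpus; a real semantic space needs hundreds of
# domain documents (e.g. the course textbook split into passages).
corpus = [
    "Steam power and mechanization transformed factory production.",
    "Railways lowered transport costs and widened markets.",
    "Rural labour migrated to industrial cities during the nineteenth century.",
    "Capital accumulation financed new machinery and infrastructure.",
]

# 1) Build a term-document matrix, 2) reduce its rank with a truncated SVD:
# this rank reduction is the core of LSA.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)
lsa = TruncatedSVD(n_components=2, random_state=0)  # real spaces: ~100-300 dims
lsa.fit(X)

def semantic_score(student_answer: str, reference_answer: str) -> float:
    """Cosine similarity between the two texts in the latent space."""
    pair = lsa.transform(vectorizer.transform([student_answer, reference_answer]))
    return float(cosine_similarity(pair[:1], pair[1:])[0, 0])

reference = "Mechanization and steam power drove the growth of factory output."
draft = "Factories produced more because steam engines mechanized work."
print(f"similarity: {semantic_score(draft, reference):.2f}")
```

In an iterative writing-to-learn workflow like the one described, a score of this kind would be returned to the student after each draft, so the answer can be revised and resubmitted until it approaches the reference.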
Keywords
writing to learn, rich feedback, automatic assessment of open-ended answers, transferable skills development
Center
Not applicable
Department
Not applicable