Publication:
Enhancing topic-detection in computerized assessments of constructed responses with distributional models of language

dc.contributor.author: Olmos, Ricardo
dc.contributor.author: León, José A.
dc.contributor.author: Martínez Huertas, José Ángel
dc.date.accessioned: 2024-05-20T11:48:38Z
dc.date.available: 2024-05-20T11:48:38Z
dc.date.issued: 2021-12-15
dc.description.abstract: Usually, computerized assessments of constructed responses use a predictive-centered approach instead of a validity-centered one. Here, we compared the convergent and discriminant validity of two computerized assessment methods designed to detect semantic topics in constructed responses: Inbuilt Rubric (IR) and Partial Contents Similarity (PCS). While both methods are distributional models of language and use the same Latent Semantic Analysis (LSA) prior knowledge, they produce different semantic representations. PCS evaluates constructed responses using non-meaningful semantic dimensions (this method is the standard LSA assessment of constructed responses), whereas IR endows the original LSA semantic space coordinates with meaning. In the present study, 255 undergraduate and high school students were each assigned one of three texts and asked to write a summary. A topic-detection task was conducted comparing the IR and PCS methods. Evidence of convergent and discriminant validity was found in favor of the IR method for topic detection in computerized constructed response assessments. In line with this, the multicollinearity of the PCS method was larger than that of the IR method, which means that the former is less capable of discriminating between related concepts or meanings. Moreover, the semantic representations of the two methods were qualitatively different; that is, they evaluated different concepts or meanings. The implications of these automated assessment methods are also discussed. First, the meaningful coordinates of the Inbuilt Rubric method can accommodate expert rubrics for computerized assessments of constructed responses, improving computer-assisted language learning. Second, they can provide high-quality computerized feedback by accurately detecting topics in other educational constructed response assessments.
Thus, it is concluded that: (1) the IR method can represent different concepts and contents of a text, simultaneously mapping a considerable variability of contents in constructed responses; (2) the semantic representations of the IR method have a qualitatively different meaning than the LSA ones and exhibit a desirably lower multicollinearity that promotes the discriminant validity of the scores of distributional models of language; and (3) the IR method can extend the performance and applications of current LSA semantic representations by endowing the dimensions of the semantic space with semantic meanings.
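The abstract describes scoring a student summary by its distributional similarity to topic representations in a semantic space. A minimal illustrative sketch of that core scoring step follows; this is not the authors' implementation (real LSA applies singular value decomposition to a large training corpus, and the Inbuilt Rubric method further transforms the space so that axes correspond to rubric topics). The vocabulary, topic descriptors, and summary below are hypothetical stand-ins, and raw bag-of-words vectors replace LSA vectors:

```python
from collections import Counter
from math import sqrt

def bow(text, vocab):
    """Bag-of-words vector of `text` over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical vocabulary and topic descriptors (stand-ins for
# rubric-defined topics in a learned semantic space).
vocab = ["memory", "encoding", "retrieval", "attention", "focus", "stimulus"]
topics = {
    "memory":    "memory encoding retrieval",
    "attention": "attention focus stimulus",
}
summary = "the text explains memory encoding and retrieval of information"

# Topic detection: score the summary against each topic vector.
s_vec = bow(summary, vocab)
scores = {name: cosine(s_vec, bow(desc, vocab)) for name, desc in topics.items()}
best = max(scores, key=scores.get)  # → "memory"
```

The same skeleton accommodates either approach from the abstract: PCS-style scoring compares the summary against vectors built from portions of the source text, while IR-style scoring compares it against axes that have been given explicit rubric meanings.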
dc.description.version: final version
dc.identifier.doi: https://doi.org/10.1016/j.eswa.2021.115621
dc.identifier.issn: 0957-4174
dc.identifier.uri: https://hdl.handle.net/20.500.14468/12582
dc.journal.title: Expert Systems with Applications
dc.journal.volume: 185
dc.language.iso: en
dc.publisher: Elsevier
dc.relation.center: Facultad de Psicología
dc.relation.department: Metodología de las Ciencias del Comportamiento
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0
dc.subject.keywords: Inbuilt Rubric
dc.subject.keywords: constructed responses
dc.subject.keywords: summaries
dc.subject.keywords: topic detection
dc.subject.keywords: Latent Semantic Analysis
dc.subject.keywords: Automated Summary Evaluation
dc.title: Enhancing topic-detection in computerized assessments of constructed responses with distributional models of language
dc.type: article
dc.type: journal article
dspace.entity.type: Publication
relation.isAuthorOfPublication: ca510876-0be8-438a-a565-ac5f8953fb78
relation.isAuthorOfPublication.latestForDiscovery: ca510876-0be8-438a-a565-ac5f8953fb78
Files
Original bundle
Name: Martinez-Huertas_JA_Enhancingto.pdf
Size: 568.28 KB
Format: Adobe Portable Document Format