Publication:
Exploring cognitive models to augment explainability in Deep Knowledge Tracing

dc.contributor.author: Labra, Concha
dc.contributor.author: Santos, Olga C.
dc.contributor.orcid: https://orcid.org/0009-0004-3499-6106
dc.contributor.orcid: https://orcid.org/0000-0002-9281-4209
dc.date.accessioned: 2024-10-08T07:53:11Z
dc.date.available: 2024-10-08T07:53:11Z
dc.date.issued: 2023-06-13
dc.description: This paper was presented at the 31st ACM Conference on User Modeling, Adaptation and Personalization (UMAP '23 Adjunct), June 26–29, 2023, Limassol, Cyprus.
dc.description.abstract: Adaptive learning systems allow personalized adaptation based on the characteristics of the student. Tracing the progress of knowledge and skills during the learning process through cognitive models is essential so that these systems can make appropriate decisions when carrying out personalization. This is the objective of Knowledge Tracing, which studies how to infer a cognitive model from the answers given to a sequence of questions or exercises. The incorporation of Deep Learning techniques in this field has given rise to Deep Knowledge Tracing (DKT), which usually has excellent predictive outcomes. The problem is that this increase in accuracy comes with a lack of explainability, since Deep Learning models can be considered black boxes from which it is difficult to build interpretations or explanations. By contrast, traditional Knowledge Tracing methods are based on underlying learning models and provide a solid basis for explainability. In this paper we describe ongoing research to build DKT models with a good trade-off between accuracy and explainability. To this end, we propose to use a loss function based on a mixup approach where the ground truth is a mix between the dataset labels and the predictions of a surrogate explainable model. The approach has the potential to improve not only explainability, through the use of the surrogate, but also accuracy, thanks to regularization effects. We will validate the approach by exploring, for different cognitive models, the trade-off curve that is obtained by plotting accuracy against explainability for different mixup values.
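The mixup-style loss described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function name, the choice of binary cross-entropy, and the mixing parameter `lam` are assumptions based on the abstract's description of mixing dataset labels with the predictions of a surrogate explainable model.

```python
import numpy as np

def mixup_bce_loss(pred, y_true, y_surrogate, lam=0.7, eps=1e-7):
    """Binary cross-entropy against a mixed target (illustrative sketch).

    The target is a convex combination of the dataset labels and the
    predictions of a surrogate explainable model (e.g. a classical
    Knowledge Tracing model). lam = 1.0 recovers standard DKT training
    on the raw labels; lam = 0.0 trains the deep model to imitate the
    surrogate only; intermediate values trace the trade-off curve.
    """
    y_mix = lam * y_true + (1.0 - lam) * y_surrogate
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(np.mean(-(y_mix * np.log(pred)
                           + (1.0 - y_mix) * np.log(1.0 - pred))))
```

Sweeping `lam` from 0 to 1 and recording accuracy at each value would produce the accuracy-versus-explainability trade-off curve mentioned in the abstract.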
dc.description.version: final version
dc.identifier.citation: Concha Labra and Olga C. Santos. 2023. Exploring cognitive models to augment explainability in Deep Knowledge Tracing. In UMAP '23 Adjunct: Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization (UMAP '23 Adjunct), June 26–29, 2023, Limassol, Cyprus. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3563359.3597384
dc.identifier.doi: https://doi.org/10.1145/3563359.3597384
dc.identifier.issn: 1550-4840
dc.identifier.uri: https://hdl.handle.net/20.500.14468/23947
dc.journal.title: ACM Association for Computing Machinery
dc.language.iso: en
dc.publisher: ACM Digital Library
dc.relation.center: Faculties and schools::E.T.S. de Ingeniería Informática
dc.relation.department: Artificial Intelligence
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.es
dc.subject: 12 Mathematics::1203 Computer science::1203.04 Artificial intelligence
dc.subject.keywords: personalized learning systems
dc.subject.keywords: scrutable user models
dc.subject.keywords: explainability
dc.subject.keywords: deep knowledge tracing (DKT)
dc.subject.keywords: cognitive models
dc.title: Exploring cognitive models to augment explainability in Deep Knowledge Tracing
dc.type: article
dc.type: journal article
dspace.entity.type: Publication
relation.isAuthorOfPublication: df3339e5-d482-4ea3-85ad-3a554c2ba075
relation.isAuthorOfPublication.latestForDiscovery: df3339e5-d482-4ea3-85ad-3a554c2ba075
Files
Original bundle
Name: Santos_OlgaC_Exploring-cognitive-models.pdf
Size: 420.61 KB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 3.62 KB
Format: Item-specific license agreed to upon submission