Publication:
On the Significance of Graph Neural Networks With Pretrained Transformers in Content-Based Recommender Systems for Academic Article Classification

dc.contributor.author: Liu, Jiayun
dc.contributor.author: Castillo-Cara, Manuel
dc.contributor.author: García Castro, Raúl
dc.contributor.funder: CYTED Ciencia y Tecnología para el Desarrollo and Comunidad de Madrid
dc.date.accessioned: 2025-10-14T15:41:24Z
dc.date.available: 2025-10-14T15:41:24Z
dc.date.issued: 2025-05-27
dc.description: The registered version of this article, first published in Expert Systems: The Journal of Knowledge Engineering, is available online at the publisher's website: Wiley, https://doi.org/10.1111/exsy.70073
dc.description: This work was supported by CYTED Ciencia y Tecnología para el Desarrollo and Comunidad de Madrid.
dc.description.abstract: Recommender systems are tools for interacting with large and complex information spaces by providing a personalised view of such spaces, prioritising items that are likely to be of interest to the user. They also serve as a significant tool in academic research, helping authors select the most appropriate journals for their articles. This paper presents a comprehensive study of various journal recommender systems, focusing on the synergy of graph neural networks (GNNs) with pretrained transformers for enhanced text classification. We propose a content-based journal recommender system that combines a pretrained Transformer with a Graph Attention Network (GAT), using title, abstract and keywords as input data. The proposed architecture enhances text representation by forming graphs from the Transformer's hidden states and attention matrices, excluding padding tokens. Our findings show that this integration improves the accuracy of journal recommendations and mitigates the Transformer oversmoothing problem, with RoBERTa outperforming BERT models. Moreover, excluding padding tokens from graph construction reduces training time by 8%–15%. Finally, we offer a publicly available dataset comprising 830,978 articles.
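
As an illustration of the architecture the abstract describes, the following is a minimal, hypothetical Python sketch (PyTorch, Hugging Face transformers, PyTorch Geometric). The model name, edge threshold, pooling, and number of journal classes are illustrative assumptions, not the paper's implementation: non-padding token hidden states become node features, the last attention layer (with padding rows and columns dropped) induces the edges, and a GAT classifies the pooled graph.

import torch
from transformers import AutoModel, AutoTokenizer
from torch_geometric.nn import GATConv

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base", output_attentions=True)

class GATClassifier(torch.nn.Module):
    # num_journals is an illustrative placeholder, not the paper's label count.
    def __init__(self, hidden=768, num_journals=100):
        super().__init__()
        self.gat = GATConv(hidden, hidden, heads=1)
        self.head = torch.nn.Linear(hidden, num_journals)

    def forward(self, x, edge_index):
        x = torch.relu(self.gat(x, edge_index))
        return self.head(x.mean(dim=0))  # mean-pool token nodes into one vector

text = "Title. Abstract. keyword one, keyword two"
enc = tokenizer(text, return_tensors="pt", padding="max_length",
                truncation=True, max_length=32)
with torch.no_grad():
    out = encoder(**enc)

mask = enc["attention_mask"][0].bool()        # True only for real (non-pad) tokens
x = out.last_hidden_state[0][mask]            # node features: non-pad hidden states
attn = out.attentions[-1][0].mean(dim=0)      # last layer, averaged over heads
attn = attn[mask][:, mask]                    # exclude padding rows and columns
# Assumed heuristic: keep above-average attention weights as graph edges.
edge_index = (attn > attn.mean()).nonzero().t()

model = GATClassifier()
logits = model(x, edge_index)                 # scores over candidate journals

Dropping the padding rows and columns before building the edge set shrinks the graph and, per the abstract, accounts for the reported 8%–15% training-time reduction.
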
dc.description.version: published version
dc.identifier.citation: Liu, J., Castillo-Cara, M. and García-Castro, R. (2025), On the Significance of Graph Neural Networks With Pretrained Transformers in Content-Based Recommender Systems for Academic Article Classification. Expert Systems, 42: e70073. https://doi.org/10.1111/exsy.70073
dc.identifier.doi: https://doi.org/10.1111/exsy.70073
dc.identifier.issn: 1468-0394
dc.identifier.uri: https://hdl.handle.net/20.500.14468/30410
dc.journal.issue: 7
dc.journal.title: Expert Systems: The Journal of Knowledge Engineering
dc.journal.volume: 42
dc.language.iso: en
dc.publisher: Wiley
dc.relation.center: E.T.S. de Ingeniería Informática
dc.relation.department: Inteligencia Artificial
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.es
dc.subject: 1203.04 Artificial Intelligence
dc.subject.keywords: BERT
dc.subject.keywords: graph attention network
dc.subject.keywords: graph neural network
dc.subject.keywords: journal recommendation
dc.subject.keywords: recommender systems
dc.subject.keywords: RoBERTa
dc.subject.keywords: transformers
dc.title: On the Significance of Graph Neural Networks With Pretrained Transformers in Content-Based Recommender Systems for Academic Article Classification
dc.type: article
dc.type: journal article
dspace.entity.type: Publication
relation.isAuthorOfPublication: c0e39bd2-c0d8-4743-953d-488baf6b977e
relation.isAuthorOfPublication.latestForDiscovery: c0e39bd2-c0d8-4743-953d-488baf6b977e
Files
Original bundle
Showing 1 - 1 of 1
Name: OnTheSignificance.pdf
Size: 836.8 KB
Format: Adobe Portable Document Format
License bundle
Showing 1 - 1 of 1
Name: license.txt
Size: 3.62 KB
Format: Item-specific license agreed to upon submission