Title: Speech gestural interpretation by applying word representations in robotics
Authors: Almagro Cádiz, Mario; Paz López, Félix de la; Fresno Fernández, Víctor Diego
Type: article
Date issued: 2018-12-03
Date available: 2024-05-20
ISSN: 1875-8835
DOI: 10.3233/ICA-180585
URI: https://hdl.handle.net/20.500.14468/12432
Language: en
License: Attribution-NonCommercial-NoDerivatives 4.0 International
Access: info:eu-repo/semantics/openAccess
Keywords: Human-robot interaction; co-verbal gesture; gestural annotation; word representation; robotic speech

Abstract: Human-Robot Interaction (HRI) is a growing area of interest in Artificial Intelligence that aims to make interaction with robots more natural, and numerous research studies on verbal and visual interaction with robots have appeared. The present paper focuses on non-verbal communication and, more specifically, on gestures that accompany speech, which remains an open problem. To develop this aspect of HRI, a new architecture is proposed for assigning gestures to speech based on the analysis of semantic similarities, so that gestures are selected intelligently using Natural Language Processing (NLP) techniques. The conditions for gesture selection are determined by assessing the effectiveness of different language models on a lexical substitution task applied to gesture annotation. On the basis of this analysis, models based on expert knowledge are compared with statistical models learned from lexical data.
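
The selection step described in the abstract (matching spoken words to annotated gestures by semantic similarity over word representations) could look roughly like the minimal Python sketch below. The toy vectors, gesture labels, and similarity threshold are illustrative assumptions for this sketch, not the paper's actual models or data:

    import numpy as np

    # Hypothetical toy word vectors standing in for a pre-trained
    # statistical model (e.g. embeddings learned from a large corpus).
    EMBEDDINGS = {
        "hello": np.array([0.9, 0.1, 0.0]),
        "greet": np.array([0.8, 0.2, 0.1]),
        "big":   np.array([0.1, 0.9, 0.2]),
        "large": np.array([0.2, 0.8, 0.1]),
        "stop":  np.array([0.0, 0.1, 0.9]),
        "halt":  np.array([0.1, 0.0, 0.8]),
    }

    # Hypothetical gesture repertoire: each gesture carries a word annotation.
    GESTURE_ANNOTATIONS = {
        "wave":         "greet",
        "arms_spread":  "large",
        "palm_forward": "halt",
    }

    def cosine(u, v):
        """Cosine similarity between two word vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def select_gesture(word, threshold=0.5):
        """Return the gesture whose annotation is semantically closest
        to `word`, or None if nothing is similar enough."""
        if word not in EMBEDDINGS:
            return None
        scores = {
            gesture: cosine(EMBEDDINGS[word], EMBEDDINGS[annotation])
            for gesture, annotation in GESTURE_ANNOTATIONS.items()
            if annotation in EMBEDDINGS
        }
        best = max(scores, key=scores.get)
        return best if scores[best] >= threshold else None

    if __name__ == "__main__":
        for token in ["hello", "big", "stop"]:
            print(token, "->", select_gesture(token))

In this reading, swapping EMBEDDINGS for vectors from a knowledge-based resource versus a statistically learned model is what the abstract's comparison of the two model families would come down to; the thresholded nearest-annotation lookup is analogous to a lexical substitution decision over the gesture annotations.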