Publication: Speech gestural interpretation by applying word representations in robotics
Date
2018-12-03
Authors
Access rights
info:eu-repo/semantics/openAccess
Publisher
IOS Press
Abstract
Human-Robot Interaction (HRI) is a growing area of interest in Artificial Intelligence that aims to make interaction with robots more natural, and numerous studies on verbal and visual interaction with robots have appeared. This paper focuses on non-verbal communication and, more specifically, on gestures that accompany speech, which remains an open problem. To develop this aspect of HRI, a new architecture is proposed that assigns gestures to speech based on the analysis of semantic similarities, so that gestures are selected intelligently using Natural Language Processing (NLP) techniques. The conditions for gesture selection are determined by assessing the effectiveness of different language models in a lexical substitution task applied to gesture annotation. On the basis of this analysis, models based on expert knowledge are compared with statistical models learned from lexical data.
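The following is a minimal sketch, not the authors' architecture, of the general idea described in the abstract: choosing a co-verbal gesture by measuring semantic similarity between the words of an utterance and the labels used to annotate a gesture inventory. The names (EMBEDDINGS, GESTURES, select_gesture), the toy word vectors, and the similarity threshold are illustrative assumptions; a real system would use pretrained embeddings (the statistical model) or a knowledge-based similarity measure (the expert-knowledge model) in their place.

```python
import numpy as np

# Placeholder word vectors (hypothetical values, for illustration only).
# A statistical model would load pretrained embeddings here instead.
EMBEDDINGS = {
    "hello": np.array([0.9, 0.1, 0.0]),
    "greet": np.array([0.8, 0.2, 0.1]),
    "big":   np.array([0.1, 0.9, 0.2]),
    "large": np.array([0.2, 0.8, 0.1]),
    "there": np.array([0.1, 0.2, 0.9]),
    "point": np.array([0.2, 0.1, 0.8]),
}

# Hypothetical gesture inventory: each robot gesture carries an annotation label.
GESTURES = {
    "wave":          "greet",
    "arms_spread":   "large",
    "deictic_point": "point",
}

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def select_gesture(utterance, threshold=0.7):
    """Return the best (word, gesture, similarity) match between the words
    of the utterance and the gesture annotation labels, or None if no
    similarity reaches the threshold (i.e., no gesture is triggered)."""
    best = None
    for word in utterance.lower().split():
        w_vec = EMBEDDINGS.get(word)
        if w_vec is None:
            continue  # word not covered by the lexical model
        for gesture, label in GESTURES.items():
            score = cosine(w_vec, EMBEDDINGS[label])
            if best is None or score > best[2]:
                best = (word, gesture, score)
    return best if best and best[2] >= threshold else None

if __name__ == "__main__":
    # With the toy vectors above, "hello" is closest to the "greet" label,
    # so the wave gesture is selected.
    print(select_gesture("hello robot"))
```

Swapping the similarity function (for instance, for a WordNet-based metric) while keeping the selection loop unchanged is one way such an architecture could compare expert-knowledge and statistical models under the same gesture-selection conditions.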
Keywords
Human-robot interaction, co-verbal gesture, gestural annotation, word representation, robotic speech
School
E.T.S. de Ingeniería Informática
Department
Inteligencia Artificial