Publication:
Self-learning robot navigation with deep reinforcement learning techniques

dc.contributor.author: Pintos Gómez de las Heras, Borja
dc.contributor.director: Martínez Tomás, Rafael
dc.contributor.director: Cuadra Troncoso, José Manuel
dc.date.accessioned: 2024-05-20T12:35:16Z
dc.date.available: 2024-05-20T12:35:16Z
dc.date.issued: 2022-09-01
dc.description.abstract: Autonomous driving has always been a challenging task. A large number of sensors mounted on the vehicle analyze the surroundings and provide the autonomous driving algorithm with useful information, such as the relative distances from the vehicle to the different obstacles. Some robotic paradigms, such as the reactive paradigm, use this sensory input to directly produce an action for the actuators. This makes the reactive paradigm capable of reacting to unpredictable scenarios with relatively low computational resources. However, it lacks robot motion planning, which can lead to longer and less comfortable trajectories than those of the hierarchical/deliberative paradigm, which includes a motion-planning module over a predefined horizon. Although local optimization of the robot trajectory then becomes possible in static scenarios, the motion-planning module comes at a high cost in memory and computational power. The hybrid paradigm combines the reactive and hierarchical/deliberative paradigms to solve even more complex scenarios, such as dynamic ones, but the memory and computational resources needed are still high. This work presents the sense-think-act-learn robotic paradigm, which aims to inherit the advantages of the reactive, hierarchical/deliberative and hybrid paradigms at a reasonable computational cost. The proposed methodology uses reinforcement learning techniques to learn a policy by trial and error, much as the human brain does. On the one hand, there is no motion-planning module, so the required computational power can be kept low, as in the reactive paradigm. On the other hand, local planning and optimization of the robot trajectory still take place, as in the hierarchical/deliberative and hybrid paradigms. This planning is based on the experience accumulated during the learning process.
Reactions to sensory inputs are learnt automatically from well-defined reward functions, which map directly to the safety, legal, comfort and task-oriented requirements of the autonomous driving problem. Since motion planning is based on experience, the proposed algorithm is not bound to any embedded model of the vehicle or the environment. Instead, it learns directly from the environment (real or simulated) and is therefore unaffected by the uncertainties of embedded models or estimators that try to reproduce the dynamics of the vehicle or robot. Additionally, the policy is learnt automatically: state-of-the-art approaches invest many engineering hours in developing a policy or algorithm that fulfils all given requirements, whereas the method proposed in this work saves these costs and engineering time. Another interesting advantage of the proposed algorithm is its ability to adapt its logic to unknown scenarios. For that purpose, an online learning process is implemented, although the memory and computational power it requires are high.
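The abstract describes reward functions mapped to the safety, legal, comfort and task-oriented requirements, and a policy learnt by trial and error; the keywords name Q-learning among the techniques used. The sketch below is purely illustrative: the reward terms, weights, thresholds and function names are assumptions, not taken from the thesis, and the update shown is the classic tabular Q-learning rule for a discretized state/action space rather than the author's actual implementation.

```python
import numpy as np

# Hypothetical reward shaping: one term per requirement family named in the
# abstract (safety, legal, comfort, task). All weights and thresholds are
# illustrative placeholders, not values from the thesis.
def reward(min_obstacle_dist, speed, speed_limit, accel, progress_to_goal):
    r_safety = -10.0 if min_obstacle_dist < 0.5 else 0.0   # penalize near-collisions
    r_legal = -1.0 if speed > speed_limit else 0.0         # penalize rule violations
    r_comfort = -0.1 * abs(accel)                          # penalize harsh actuation
    r_task = 1.0 * progress_to_goal                        # reward progress made
    return r_safety + r_legal + r_comfort + r_task

# Classic tabular Q-learning update: learning from experience replaces an
# explicit motion-planning module, as the abstract argues.
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    td_target = r + gamma * np.max(Q[s_next])   # bootstrapped return estimate
    Q[s, a] += alpha * (td_target - Q[s, a])    # move Q(s,a) toward the target
    return Q
```

For the continuous state and action spaces of a real vehicle, the table would be replaced by neural function approximators, which is where the deep deterministic policy gradient (DDPG) method listed in the keywords comes in.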
dc.description.version: final version
dc.identifier.uri: https://hdl.handle.net/20.500.14468/14565
dc.language.iso: en
dc.publisher: Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Inteligencia Artificial
dc.relation.center: Facultades y escuelas::E.T.S. de Ingeniería Informática
dc.relation.degree: Máster Universitario en I.A. Avanzada: Fundamentos, Métodos y Aplicaciones
dc.relation.department: Inteligencia Artificial
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es
dc.subject.keywords: deep reinforcement learning
dc.subject.keywords: self-learning
dc.subject.keywords: autonomous driving
dc.subject.keywords: deep deterministic policy gradient
dc.subject.keywords: Q-learning
dc.subject.keywords: dynamic environment
dc.title: Self-learning robot navigation with deep reinforcement learning techniques
dc.type: master thesis
dspace.entity.type: Publication
Files:
Name: Pintos_Gomez_delasHeras_Borja_TFM.pdf
Size: 4.28 MB
Format: Adobe Portable Document Format