Publication: Self-learning robot navigation with deep reinforcement learning techniques
Date
2022-09-01
Access rights
info:eu-repo/semantics/openAccess
Publisher
Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Inteligencia Artificial
Abstract
Autonomous driving has always been a challenging task. A large number of sensors mounted on the vehicle analyze the surroundings and provide the autonomous driving algorithm with useful information, such as the relative distances from the vehicle to the different obstacles. Some robotic paradigms, like the reactive paradigm, use this sensory input to directly generate an action for the actuators. This makes the reactive paradigm capable of reacting to unpredictable scenarios with relatively low computational resources. However, it lacks a motion planning module, which can lead to longer and less comfortable trajectories compared with the hierarchical/deliberative paradigm, which includes a motion planning module over a predefined horizon. Although this makes local optimization of the robot trajectory possible in static scenarios, the motion planning module comes at a high cost in terms of memory and computational power. The hybrid paradigm combines the reactive and hierarchical/deliberative paradigms to solve even more complex scenarios, such as dynamic ones, but the memory and computational resources required are still high. This work presents the sense-think-act-learn robotic paradigm, which aims to inherit the advantages of the reactive, hierarchical/deliberative and hybrid paradigms at a reasonable computational cost. The proposed methodology uses reinforcement learning techniques to learn a policy by trial and error, much as the human brain does. On the one hand, there is no motion planning module, so the computational power can be kept low, as in the reactive paradigm. On the other hand, local planning and optimization of the robot trajectory still take place, as in the hierarchical/deliberative and hybrid paradigms. This planning is based on the experience stored during the learning process.
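The trial-and-error learning loop described above can be sketched with tabular Q-learning, one of the techniques listed in the keywords. The gridworld, reward values and hyperparameters below are illustrative assumptions, not the setup used in the thesis:

```python
import random

# Hypothetical 4x4 gridworld: the agent starts at (0, 0), the goal is (3, 3).
# States are (row, col); the four actions move up, down, left or right.
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
SIZE, GOAL = 4, (3, 3)

def step(state, action):
    """Apply an action, clipping at the walls; return (next_state, reward, done)."""
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (r, c)
    # Illustrative reward: +1 at the goal, a small penalty per move.
    return nxt, (1.0 if nxt == GOAL else -0.04), nxt == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Learn Q(s, a) by trial and error, with no motion planning module."""
    rng = random.Random(seed)
    Q = {(r, c): [0.0] * 4 for r in range(SIZE) for c in range(SIZE)}
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            # epsilon-greedy: explore with probability eps, otherwise exploit.
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q[s][i])
            s2, r, done = step(s, ACTIONS[a])
            # Q-learning update: bootstrap on the best next-state value.
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

def greedy_path(Q):
    """Follow the learnt policy greedily from the start to the goal."""
    s, path = (0, 0), [(0, 0)]
    while s != GOAL and len(path) < 20:
        a = max(range(4), key=lambda i: Q[s][i])
        s, _, _ = step(s, ACTIONS[a])
        path.append(s)
    return path
```

After training, the greedy policy reaches the goal using only the experience stored in the Q-table: the "planning" is implicit in the learnt values, mirroring the paradigm's claim that no explicit motion planning module is needed.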
Reactions to sensory inputs are learnt automatically from well-defined reward functions, which map directly to the safety, legal, comfort and task-oriented requirements of the autonomous driving problem. Since motion planning is based on experience, the proposed algorithm is not bound to any embedded model of the vehicle or environment. Instead, it learns directly from the environment (real or simulated) and is therefore unaffected by the uncertainties of embedded models or estimators that try to reproduce the dynamics of the vehicle or robot. Additionally, the policy is learnt automatically: state-of-the-art approaches invest many engineering hours in developing a policy or algorithm that fulfils all given requirements, whereas the method proposed in this work saves this cost and engineering time. Another interesting advantage of the proposed algorithm is its ability to adapt its logic to unknown scenarios. For that purpose, an online learning process is implemented, although the memory and computational power it requires are high.
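The mapping from requirements to reward terms can be sketched as a weighted sum, one term per requirement class named in the abstract. The specific terms, thresholds and weights here are illustrative assumptions, not the reward design of the thesis:

```python
def navigation_reward(min_obstacle_dist, speed, speed_limit, accel, progress):
    """Composite reward: one term per requirement class (all values hypothetical)."""
    safety = -10.0 if min_obstacle_dist < 0.5 else 0.0  # penalize near-collisions
    legal = -1.0 if speed > speed_limit else 0.0        # penalize speeding
    comfort = -0.1 * abs(accel)                         # penalize harsh accel/braking
    task = 1.0 * progress                               # reward progress toward the goal
    return safety + legal + comfort + task
```

Because each term is tied to an observable quantity rather than to a vehicle model, the agent can maximize this signal directly against the real or simulated environment, which is what makes the approach robust to model uncertainty.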
Keywords
deep reinforcement learning, self-learning, autonomous driving, deep deterministic policy gradient, Q-learning, dynamic environment
Center
Facultades y escuelas::E.T.S. de Ingeniería Informática
Department
Inteligencia Artificial