Publication:
Self-Learning Robot Autonomous Navigation with Deep Reinforcement Learning Techniques

dc.contributor.author: Pintos Gómez de las Heras, Borja
dc.contributor.author: Martínez Tomás, Rafael
dc.contributor.author: Cuadra Troncoso, José Manuel
dc.date.accessioned: 2024-05-20T11:42:42Z
dc.date.available: 2024-05-20T11:42:42Z
dc.date.issued: 2023-12-30
dc.description.abstract: Complex, computationally expensive algorithms are usually the state-of-the-art solution for autonomous driving cases in which non-holonomic robots must be controlled in scenarios with spatial restrictions and interaction with dynamic obstacles, while fulfilling safety, comfort, and legal requirements at all times. These highly complex software solutions must cover the high variability of use cases that can arise in traffic, especially in scenarios with dynamic obstacles. Reinforcement learning algorithms are a powerful tool in autonomous driving scenarios because the required behavioral complexity is learned automatically by trial and error from simple reward functions. This paper proposes a methodology for defining simple reward functions and automatically deriving a complex and successful autonomous driving policy. The proposed methodology has no motion planning module, so computational demands remain limited, as in the reactive robotic paradigm. Reactions are learned by maximizing the cumulative reward obtained during the learning process. Because motion is driven by the cumulative reward, the proposed algorithm is not bound to any embedded model of the robot and is not affected by the uncertainties of such models or estimators, making it possible to generate trajectories that respect non-holonomic constraints. The paper explains the proposed methodology, describes the experimental setup, and discusses the results validating the methodology in scenarios with dynamic obstacles. A comparison with state-of-the-art approaches is also carried out to show how the proposed methodology outperforms them.
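The abstract centers on maximizing a cumulative reward built from simple per-step reward terms, with no explicit motion planning module. As a rough illustrative sketch only (the function names, reward terms, and weights below are assumptions for illustration, not taken from the paper), a shaped per-step reward and its discounted return could look like this in Python:

# Illustrative sketch, not the paper's implementation: a simple per-step reward
# penalizing collisions and uncomfortable lateral acceleration while rewarding
# progress, plus the discounted cumulative return the policy would maximize.

def step_reward(collision, progress, lateral_accel,
                w_progress=1.0, w_comfort=0.1, collision_penalty=-100.0):
    """Per-step reward: progress toward the goal minus a comfort penalty."""
    if collision:
        return collision_penalty
    return w_progress * progress - w_comfort * abs(lateral_accel)

def discounted_return(rewards, gamma=0.99):
    """Cumulative discounted reward G = sum_t gamma^t * r_t."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Example episode: steady progress, one sharp turn, no collision.
rewards = [step_reward(False, 0.5, a) for a in (0.1, 0.1, 2.0, 0.1)]
print(round(discounted_return(rewards), 3))

A deep reinforcement learning agent trained against such a return would, in the spirit of the abstract, learn its driving reactions directly from these simple terms rather than from a hand-crafted planner.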
dc.description.version: published version
dc.identifier.doi: 10.3390/app14010366
dc.identifier.issn: 2076-3417
dc.identifier.uri: https://hdl.handle.net/20.500.14468/12452
dc.journal.issue: 1
dc.journal.title: Applied Sciences
dc.journal.volume: 14
dc.language.iso: en
dc.publisher: MDPI
dc.relation.center: E.T.S. de Ingeniería Informática
dc.relation.department: Inteligencia Artificial
dc.rights: Attribution 4.0 International
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0
dc.subject.keywords: autonomous robots
dc.subject.keywords: deep reinforcement learning
dc.subject.keywords: dynamic environment
dc.subject.keywords: comfort driving
dc.subject.keywords: self-learning
dc.title: Self-Learning Robot Autonomous Navigation with Deep Reinforcement Learning Techniques
dc.type: article
dc.type: journal article
dspace.entity.type: Publication
relation.isAuthorOfPublication: c87ba267-e907-4b5f-ad5f-319c1cb3d3cd
relation.isAuthorOfPublication: fa1a8e74-c3ef-42cf-b132-ac98810c1b92
relation.isAuthorOfPublication.latestForDiscovery: c87ba267-e907-4b5f-ad5f-319c1cb3d3cd
Files
Original bundle
Name: Martinez_Tomas_Rafael_SelfLearning_Robot_Auto.pdf
Size: 5.63 MB
Format: Adobe Portable Document Format