Publication: Navegación por centro de áreas utilizando cámara de mapa de profundidad (center-of-areas navigation using a depth-map camera)
Date
2021-09-01
Access rights
Attribution-NonCommercial-NoDerivatives 4.0 International
info:eu-repo/semantics/openAccess
Publisher
Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Inteligencia Artificial
Abstract
Starting from the doctoral thesis of Dr. José Manuel Cuadra Troncoso, MODELADO ADAPTATIVO DEL MEDIO PARA LA NAVEGACIÓN DE ROBOTS AUTÓNOMOS UTILIZANDO ALGORITMOS BASADOS EN EL CENTRO DE ÁREAS (adaptive modeling of the environment for autonomous-robot navigation using center-of-areas algorithms), this project aims to demonstrate that depth-map cameras offer considerable advantages over the 2D laser range finder used in that thesis. A 2D laser can only see objects in a single plane, its scanning plane; a depth-map camera, by contrast, provides as many planes as its vertical resolution. This way of measuring the environment also makes it possible to derive the height of objects above the ground, giving the system a 3D character. As will be shown, this 3D capability is not needed to drive a robot under an object (a bridge, for example). For a "world" on the ground there is no need for full 3D: what has been called bounded-3D (acotado-3D) is enough, and the height does not even have to be computed, since discarding all the information in the upper half-plane of vision (except for a few rows kept for safety, a few centimeters above the camera) suffices. Objects lying on the ground (a stone, for example) are also detected, as shown in the closed-circuit experiment. All the experiments of the thesis have been reproduced (see chapter 9). The closed-circuit experiment was modified to demonstrate the points above: two bridges were added, one of them with two levels, together with a "stone" on the ground. The depth-camera measurements serve as input to the libareacenter library, which builds the global and advance polygons, internally computes the center-of-areas measure used for navigation, and outputs the velocities of the left and right wheels of the differential-drive robot.
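As a minimal, hypothetical sketch of the bounded-3D (acotado-3D) filtering described in the abstract, the fragment below collapses a depth image into a single range scan by discarding every row of the upper half of the field of view except a small safety margin above the camera. The parameter names (horizon_row, safety_rows) and the NumPy implementation are illustrative assumptions, not part of the libareacenter API.

    import numpy as np

    def depth_to_range_scan(depth, horizon_row, safety_rows=5):
        """Collapse a depth image (meters, shape HxW) into one range per column.

        Rows above `horizon_row - safety_rows` (the upper half-plane of vision,
        minus a few rows kept for safety a few centimeters above the camera)
        are discarded, so anything the robot can pass under (a bridge) never
        produces a reading, while obstacles on the ground (a stone) still
        appear in the remaining rows and are detected.
        """
        cutoff = max(horizon_row - safety_rows, 0)
        lower = depth[cutoff:, :]                    # keep safety rows + lower half
        masked = np.where(lower > 0, lower, np.inf)  # ignore invalid (zero) pixels
        scan = masked.min(axis=0)                    # closest reading per column
        scan[np.isinf(scan)] = 0.0                   # 0 marks columns with no reading
        return scan

The resulting per-column ranges would then play the role the 2D laser readings played in the original thesis, i.e. the input from which libareacenter builds the global and advance polygons and derives the left and right wheel velocities.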
Center
Faculties and schools::E.T.S. de Ingeniería Informática
Department
Inteligencia Artificial (Artificial Intelligence)