Browsing by Department "Inteligencia Artificial"
Showing 1 - 20 of 300
Results per page
Sort options
Publication: A 3-D Simulation of a Single-Sided Linear Induction Motor with Transverse and Longitudinal Magnetic Flux (MDPI, 2020). Domínguez Hernández, Juan Antonio; Duro Carralero, Natividad; Gaudioso Vázquez, Elena. ORCID: https://orcid.org/0000-0002-6437-5878
This paper presents a novel and improved configuration of a single-sided linear induction motor. The geometry of the motor has been modified so that it can operate with a mixed magnetic flux configuration and with a new configuration of paths for the eddy currents induced inside the aluminum plate. To this end, two 1 mm slots of dielectric have been introduced into the aluminum layer of the moving part, an iron yoke has been added to the primary part, and the width of the transversal slots has been optimized. Specifically, in the enhanced motor there are two magnetic fluxes that circulate across two different planes: a longitudinal magnetic flux along the direction of movement and a transversal magnetic flux closed through a plane perpendicular to that direction. With this new configuration, the motor achieves a large increase in thrust force without increasing the electrical supply. In addition, the proposed model creates a new spatial configuration of the eddy currents and an improvement of the main magnetic circuit. These novelties are relevant because they represent a great improvement in the efficiency of the linear induction motor at low velocities and at a very low cost. All simulations have been made with the 3-D finite element method, both at standstill and in motion, in order to obtain the characteristic curves of the main forces developed by the linear induction motor.

Publication: A Bayesian Graphical Model for Frequency Recovery of Periodic Variable Stars (Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Inteligencia Artificial, 2014-02-27). Delgado-Ureña Poirier, Héctor; Sarro Baro, Luis Manuel
This thesis has been developed in the context of the recently launched European Space Agency's Gaia mission. The thesis has addressed the problem of determining the probability distributions of the real physical parameters for a variable star population, given the values recovered by the Data Processing and Analysis Consortium (DPAC) from the telemetry of the satellite. These recovered values are affected by a number of stochastic errors and systematic biases due to the aliasing phenomenon produced by the Gaia scanning law, the optical and photometric resolution of the satellite, and the algorithms used in the recovery process. The purpose of the thesis has been to model the data recovery process and infer the real distributions of the frequencies, apparent G magnitudes and amplitudes for a Large Magellanic Cloud (LMC) classical Cepheid population. A two-level Bayesian graphical model was constructed with the aid of a domain expert to model the recovery process, and a Markov chain Monte Carlo (MCMC) algorithm was specified to perform the inference. The system was implemented in the declarative BUGS language. The system was trained on a set of recovered data from an artificially generated real distribution of LMC Cepheids, and tested by comparing the parameters of the artificially generated real distributions with the distributions inferred by the MCMC algorithm. The results obtained have shown that the system successfully removes the systematic biases and is able to correctly infer the real frequency distribution. The results have also shown a correct inference for the real apparent magnitudes in the G band.
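The thesis above specifies its graphical model in BUGS and runs MCMC for inference. As a purely illustrative sketch of the underlying idea — the model, priors and data below are invented stand-ins, not the thesis's Cepheid model — a minimal Metropolis sampler for one parameter might look like:

```python
import math
import random

random.seed(42)

# Toy "observed" values standing in for recovered parameters; the real
# thesis models LMC Cepheid frequencies, magnitudes and amplitudes.
data = [random.gauss(2.0, 0.5) for _ in range(200)]

def log_posterior(mu):
    # Flat prior on mu, Gaussian likelihood with known sigma = 0.5:
    # log p(mu | data) = const - sum((x - mu)^2) / (2 * sigma^2).
    return -sum((x - mu) ** 2 for x in data) / (2 * 0.5 ** 2)

def metropolis(n_samples, start=0.0, step=0.1):
    samples, mu = [], start
    lp = log_posterior(mu)
    for _ in range(n_samples):
        proposal = mu + random.gauss(0.0, step)
        lp_prop = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < lp_prop - lp:
            mu, lp = proposal, lp_prop
        samples.append(mu)
    return samples

chain = metropolis(5000)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

BUGS-family tools automate exactly this kind of sampling over a full graphical model, which is why the thesis could focus on model structure rather than sampler code.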
Nevertheless, the results obtained for the real amplitude distribution have not allowed us to establish significant conclusions.

Publication: A block-based model for monitoring of human activity (Elsevier, 2011-03). Folgado Zuñiga, Encarnación; Carmona, Enrique J.; Rincón Zamorano, Mariano; Bachiller Mayoral, Margarita
The study of human activity is applicable to a large number of science and technology fields, such as surveillance, biomechanics or sports applications. This article presents BB6-HM, a block-based human model for real-time monitoring of a large number of visual events and states related to human activity analysis, which can be used as components of a library to describe more complex activities in such important areas as surveillance, for example, luggage at airports, clients' behaviour in banks and patients in hospitals. BB6-HM is inspired by the proportionality rules commonly used in the visual arts, i.e., dividing the human silhouette into six rectangles of the same height. The major advantage of this proposal is that the analysis of the human figure can easily be broken down into regions, so that we can obtain information about activities. The computational load is very low, so a very fast implementation is possible. Finally, this model has been applied to build classifiers for the detection of primitive events and visual attributes using heuristic rules and machine learning techniques.

Publication: A Deep Neural Network for Describing Breast Ultrasound Images in Natural Language (Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Inteligencia Artificial, 2022-09-01). Carrilero Mardones, Mikel; Nogales Moyano, Alberto; Pérez Martín, Jorge; Díez Vegas, Francisco Javier
Breast cancer is the most common type of cancer and the leading cause of cancer mortality in the female population. However, early detection can raise the five-year relative survival rate from 29% to 99%. Ultrasound is one of the most widely used techniques for breast cancer diagnosis, but an expert is needed to interpret its results correctly. Such expertise is scarce in some countries without an appropriate screening programme, which lowers the rate to 20%. Computer-aided diagnosis (CAD) systems aim to help physicians in this process, improving results and saving time. Breast cancer experts use the BI-RADS classification to describe tumours, estimate their malignancy, and establish the treatment to follow. While most CAD systems merely classify images by malignancy, we present a model based on two systems for real-time detection and BI-RADS description of tumours. The first is a YOLO-based detection algorithm that achieves a precision of 0.965, a recall of 0.95, and an area under the precision-recall curve of 0.97. The second is a description system that receives the detected tumour and returns, in natural language, its BI-RADS description and an estimate of its malignancy. For this system we carried out three experiments in collaboration with an expert breast radiologist, obtaining agreement values with her diagnoses that lie between the inter-rater and intra-rater correlation values among experts reported in the literature.
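The precision and recall figures quoted for the detector are simple ratios of detection counts. A minimal sketch — the counts below are invented for illustration, chosen only to reproduce values of that order:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical counts for a tumour detector on a validation set:
# 193 tumours found correctly, 7 false alarms, 10 tumours missed.
p, r = precision_recall(tp=193, fp=7, fn=10)
```

The area under the precision-recall curve then summarizes these two quantities as the detector's confidence threshold is swept.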
We also observed that training the models with BI-RADS descriptors improves malignancy classification and brings the models closer to expert reasoning.

Publication: A historical perspective of algorithmic lateral inhibition and accumulative computation in computer vision (Elsevier, 2011-03). Fernández Caballero, Antonio; Carmona, Enrique J.; Delgado, Ana Esperanza; López López, María Dolores
Certainly, one of the prominent ideas of Professor José Mira was that it is absolutely mandatory to specify the mechanisms and/or processes underlying each task and inference mentioned in an architecture in order to make that architecture operational. The conjecture of the last fifteen years of joint research has been that any bottom-up organization may be made operational using two biologically inspired methods: "algorithmic lateral inhibition", a generalization of lateral inhibition anatomical circuits, and "accumulative computation", a working memory related to the temporal evolution of the membrane potential. This paper is dedicated to the computational formulation of both methods. Finally, all the works of our group related to this methodological approach are mentioned and summarized, showing that all of them support its validity.

Publication: A Knowledge Graph Framework for Dementia Research Data (MDPI, 2023-09-20). Timón Reina, Santiago; Kirsebom, Bjørn-Eivind; Fladby, Tormod; Rincón Zamorano, Mariano; Martínez Tomás, Rafael
Dementia disease research encompasses diverse data modalities, including advanced imaging, deep phenotyping, and multi-omics analysis. However, integrating these disparate data sources has historically posed a significant challenge, obstructing the unification and comprehensive analysis of collected information. In recent years, knowledge graphs have emerged as a powerful tool to address such integration issues by enabling the consolidation of heterogeneous data sources into a structured, interconnected network of knowledge. In this context, we introduce DemKG, an open-source framework designed to facilitate the construction of a knowledge graph integrating dementia research data, comprising three core components: a KG-builder that integrates diverse domain ontologies and data annotations, an extensions ontology providing terms tailored for dementia research, and a versatile transformation module for incorporating study data.
In contrast with other current solutions, our framework provides a stable foundation by leveraging established ontologies and community standards, and it simplifies study data integration while delivering solid ontology design patterns, broadening its usability. Furthermore, the modular approach of its components enhances flexibility and scalability. We showcase how DemKG might aid and improve multi-modal data investigations through a series of proof-of-concept scenarios focused on relevant Alzheimer's disease biomarkers.

Publication: A new spatio-temporal neural network approach for traffic accident forecasting (Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Inteligencia Artificial, 2019-09-26). Medrano López, Rodrigo de; Aznarte Mellado, José Luis
Traffic accident forecasting is a major priority for governmental traffic bodies around the world, seeking to reduce human, property and economic losses. The increasing amounts of traffic accident data have been used to train machine learning predictors, although this is a challenging task due to the relative rareness of accidents, the interdependence of accidents in both time and space, and the high dependency on human behavior. Recently, deep learning techniques have shown significant prediction improvements over traditional models, but some difficulties and open questions remain around their applicability, accuracy and ability to provide practical information. This paper proposes a new spatio-temporal deep learning framework based on a latent model for simultaneously predicting the number of traffic accidents in each neighborhood of Madrid, Spain, over varying training and prediction time horizons.

Publication: A new video segmentation method of moving objects based on blob-level knowledge (Elsevier, 2008-02-01). Carmona, Enrique J.; Martínez Campos, Javier; Mira Mira, José
Variants of the background subtraction method are broadly used for the detection of moving objects in video sequences in different applications. In this work we propose a new approach to background subtraction which operates in the colour space and manages the colour information in the segmentation process to detect and eliminate noise. This new method is combined with blob-level knowledge associated with the different types of blobs that may appear in the foreground. The idea is to process each pixel differently according to the category to which it belongs: real moving objects, shadows, ghosts, reflections, fluctuation or background noise. Thus, the foreground resulting from processing each image frame is refined selectively, applying at each instant the appropriate operator according to the type of noise blob we wish to eliminate. The proposed approach is adaptive, because it allows both the background model and the threshold model to be updated. On the one hand, the results obtained confirm the robustness of the proposed method over a wide range of different sequences; on the other hand, they underline the importance of handling three colour components in the segmentation process rather than just the grey-level component.

Publication: A novel approach to the placement problem for FPGAs based on genetic algorithms (Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática.
Departamento de Inteligencia Artificial, 2017-07-07). Veredas Ramírez, Francisco Javier; Carmona Suárez, Enrique Javier
This Master's thesis investigates critical path optimization in FPGA placement. An initial investigation of the FPGA placement problem shows that minimizing the traditional cost function used in simulated-annealing placement does not always produce a minimal critical path. Therefore, it is proposed to use the routing algorithm itself as a cost function to improve the final critical path. The experimental results confirm that this new cost function yields better-quality results than the traditional cost function, at the expense of longer execution time. A genetic algorithm using the routing algorithm as a cost function is found to reduce the execution time while maintaining a minimal critical path. The use of genetic algorithms with the new cost function will be useful in those cases where a minimum critical path is needed. Furthermore, this work investigates the use of a genetic algorithm with the traditional cost function; in this case, no improvement in critical path is observed in comparison with simulated-annealing placement.

Publication: A Probabilistic Graphical Model for Total Knee Arthroplasty (Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Inteligencia Artificial, 2011-07-13). León Guerra, Diego; Díez Vegas, Javier

Publication: A quantum evolutionary approach to solving the team formation problem in social networks (Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Inteligencia Artificial, 2019-09-24). Álvarez Lois, Pedro Pablo; Fernández Galán, Severino
Recent advances in information and communication technologies have led to the expansion of collaborative work. Complex problems in science, engineering, or business are being solved by teams of people working closely with one another. However, forming teams of experts is a computationally challenging problem that requires powerful solution techniques. A metaheuristic algorithm that incorporates some of the principles of quantum computing into an evolutionary structure is presented. The resulting Quantum Evolutionary Algorithm (QEA) is able to produce an adequate balance between intensification and diversification during the search process. Numerical experiments have shown that the QEA significantly improves the quality of solutions for hard instances of the team formation problem, particularly when compared to a standard genetic algorithm. Successful performance of the algorithm requires careful parameter tuning, as well as a mechanism to effectively share information across the population of candidate solutions.

Publication: A Survey of Video Datasets for Human Action and Activity Recognition (Elsevier, 2013-06). Chaquet, José M.; Carmona, Enrique J.; Fernández Caballero, Antonio
Vision-based human action and activity recognition has increasing importance in the computer vision community, with applications to visual surveillance, video retrieval and human-computer interaction. In recent years, more and more datasets dedicated to human action and activity recognition have been created. The use of these datasets allows us to compare different recognition systems with the same input data.
The survey presented in this paper aims to remedy the lack of a complete description of the most important public datasets for video-based human activity and action recognition and to guide researchers in choosing the most suitable dataset for benchmarking their algorithms.

Publication: Advanced Control by Reinforcement Learning for Wastewater Treatment Plants: A Comparison with Traditional Approaches (MDPI, 2023). Gorrotxategi Zipitria, Mikel; Hernández del Olmo, Félix; Gaudioso Vázquez, Elena; Duro Carralero, Natividad; Dormido Canto, Raquel
Control mechanisms for the biological treatment stage of wastewater treatment plants are mostly based on PIDs. However, their performance is far from optimal due to the high non-linearity of the biological and changing processes involved. Therefore, more advanced control techniques have been proposed in the literature (e.g., using artificial intelligence techniques). However, these new control techniques have not been compared to the traditional approaches actually used in real plants. To this end, in this paper we compare the PID control configurations currently applied to control the dissolved oxygen concentration (in the activated sludge process) against a reinforcement learning agent. Our results show that a very competitive operating cost budget can be achieved when these innovative techniques are applied.

Publication: Agricultura de precisión. Optimización del uso de herbicidas mediante visión artificial (Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Inteligencia Artificial, 2023-03-14). Pina Herce, Luis Enrique; Pastor Vargas, Rafael
For thousands of years, farmers have sought ways to increase food production on their plots. As equipment and technology have evolved, farms have become larger and yields have increased.
However, this challenge continues today, and its modern incarnation has received a name: precision agriculture. This dissertation studies one of the branches of precision agriculture, herbicide application, through the use of computer vision. The project focuses on investigating a fast and efficient model for detecting weeds in images. This model is the central piece of a system that receives images of the ground in real time and, depending on what it sees, applies the herbicide to the ground or not.

Publication: Agrupación automática de mensajes de foros (Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática., 2024). Priego Wood, Martín
Discussion forums make it possible to ask questions and obtain answers by exploiting the so-called wisdom of the crowd, and they have become essential tools of online courses such as those at UNED. Forums are usually divided into subforums devoted to specific topics, but users often post messages in the wrong subforum, which hampers their visibility and may require manual relocation. To help prevent these errors and ease maintenance tasks, this work develops a system that automatically clusters forums like those at UNED and measures the semantic similarity between messages. Likewise, given a subforum structure full of messages and a new message, the system can generate insertion recommendations based on similarity. The work includes a fundamental research component: an exploratory analysis of 7 UNED forums and experiments with various natural language processing and unsupervised learning techniques. For example, bag-of-words document representations are tried, as well as more modern ones such as word and even sentence embeddings.
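The cosine and angular similarity metrics compared in this work are closely related: angular distance is the arccosine of cosine similarity, rescaled. A small sketch over toy bag-of-words vectors (the vocabulary and counts are invented for illustration):

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def angular_distance(u, v):
    # arccos of the cosine similarity, scaled to [0, 1]. Unlike
    # "1 - cosine", this is a proper metric (triangle inequality holds),
    # which is the advantage noted in the abstract.
    cos = max(-1.0, min(1.0, cosine_similarity(u, v)))
    return math.acos(cos) / math.pi

# Two toy bag-of-words count vectors over a shared 4-term vocabulary.
msg_a = [2, 0, 1, 3]
msg_b = [1, 1, 0, 2]
sim = cosine_similarity(msg_a, msg_b)
dist = angular_distance(msg_a, msg_b)
```

In practice the vectors would be TF-IDF-weighted counts or sentence embeddings rather than raw counts, but the metrics apply unchanged.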
The best results are obtained with weighted versions of the bag of words and with pre-trained multilingual sentence-encoding models. As for similarity between messages, the cosine and angular metrics produce similar results, but the latter has the possible advantage of being a proper distance. Finally, the clustering algorithms k-means, agglomerative and HDBSCAN (also hierarchical, but density-based) are tested. The clusterings are evaluated using external measures, such as adjusted mutual information, as well as internal ones, such as the silhouette and the density-based validation index. The k-means algorithm achieves the best average alignment with the original subforum structure, but the other two also have advantages, in terms of execution time and the additional information provided by their hierarchies. HDBSCAN stands out for its flexibility, robustness and the intuitive nature of its parameters. The clustering system developed is able to identify fully meaningful groups on its own. Sometimes these groups are subsets of an original subforum, and may even be close relatives of other subsets of the same subforum in a hierarchical clustering. At other times, the generated groups cut across the original structure, owing to the presence of similar messages, for example acknowledgements, across subforums. Even when the original structure is hard to reproduce automatically, the similarity ranking created by the system should facilitate the correct placement of new messages.

Publication: Alf: un entorno abierto para el desarrollo de comunidades virtuales de trabajo y cursos adaptados a la educación superior (2005-02-23). Raffenne, Emmanuelle; Aguado, M.; Arroyo, D.; Cordova, M. A.; Guzmán Sánchez, José Luis; Hermira, S.; Ortíz, J.; Pesquera, A.; Morales, R.; Romojaro Gómez, Héctor; Valiente, S.; Carmona, G.; Tejedor, D.; Alejo, J. A.; García Saiz, Tomás; González Boticario, Jesús; Pastor Vargas, Rafael
Keywords: Alf, work environment, virtual communities, higher education

Publication: An Adaptive, Comprehensive Application to Support Home-Based Visual Training for Children With Low Vision (IEEE, 2019). Matas, Yolanda; Santos, Carlos; Hernández del Olmo, Félix; Gaudioso Vázquez, Elena
Low vision is a visual impairment that cannot be improved by standard vision aids such as glasses. Therefore, to improve their visual skills, people affected by low vision usually follow a visual training program planned and supervised by an expert in this field. Visual training is especially suitable for children because of their plasticity for learning. However, due to a lack of specialists, training sessions are usually less frequent than optimal. Thus, home-based visual training has emerged as a solution to this problem because it can be undertaken by experts and families together. We implemented the Visual Stimulation on the Internet (EVIN) application, which provides comprehensive visual training tasks through games. It also provides reports on children's performance in these tasks. Although EVIN has shown its usefulness in previous works, two main solutions are needed: (i) a support setup to help experts and families work together to address, among other things, the large variety of exercises and different configurations that can be prescribed, and (ii) a rigorous experimental design to compare children trained with EVIN and those trained with traditional materials. To face these challenges, we present an adaptive version of EVIN that provides a new design tool allowing experts to plan visual training tasks through templates in advance. In addition, we developed new metrics and reports to achieve a more accurate assessment of a child's improvement.
Among other results, this allowed us to run a reliable experiment to evaluate significant improvements in children trained with EVIN.

Publication: An Analysis of Multiple Criteria and Setups for Bluetooth Smartphone-Based Indoor Localization Mechanism (Hindawi, 2017-10-23). Lovón Melgarejo, Jesús; Bravo Rocca, Gusseppe; Orozco Barbosa, Luis; García Varea, Ismael; Castillo Cara, José Manuel
Bluetooth Low Energy (BLE) 4.0 beacons will play a major role in the deployment of energy-efficient indoor localization mechanisms. Since BLE4.0 is highly sensitive to fast-fading impairments, numerous ongoing studies are exploring the use of supervised learning algorithms as an alternative approach to exploit the information provided by indoor radio maps. Despite the large number of results reported in the literature, there are still many open issues in the performance evaluation of this approach. In this paper, we start by identifying, in a simple setup, the main system parameters to be taken into account in the design of BLE4.0 beacon-based indoor localization mechanisms. In order to shed some light on the evaluation process using supervised learning algorithms, we carry out an in-depth experimental evaluation in terms of mean localization error, local prediction accuracy, and global prediction accuracy. Based on our results, we argue that, in order to fully assess the capabilities of supervised learning algorithms, it is necessary to include all three metrics.

Publication: An Artificial Intelligence Approach for Generalizability of Cognitive Impairment Recognition in Language (Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Inteligencia Artificial, 2022-02-01). González Machorro, Mónica; Martínez Tomás, Rafael
Introduction: Language disturbances are considered one of the earliest signs of cognitive impairment.
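Returning briefly to the BLE beacon study above: supervised localization over a radio map is commonly approached with nearest-neighbour rules on RSSI fingerprints. A purely illustrative sketch — the beacon readings, zones and 1-NN rule below are invented for the example, not taken from the paper:

```python
import math

# Hypothetical radio map: mean RSSI (dBm) from three beacons,
# fingerprinted at three known zones. All values are invented.
fingerprints = [
    ((-40, -70, -80), "room_a"),
    ((-75, -45, -72), "room_b"),
    ((-78, -74, -42), "room_c"),
]

def locate(rssi):
    """Return the zone whose stored fingerprint is nearest to the
    observed RSSI vector in Euclidean space (1-NN classification)."""
    _, zone = min(
        (math.dist(rssi, fp), zone) for fp, zone in fingerprints
    )
    return zone

predicted = locate((-42, -68, -79))
```

Mean localization error and the local/global prediction accuracies discussed in the paper are then computed by scoring such predictions against held-out ground-truth positions.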
Objectives: One challenge, however, is the disconnect between the results obtained in previous research and their application in clinical contexts. This is largely due to the lack of standardization and of data in this field. The proposal of this work is to use artificial intelligence techniques to address this challenge of generalization. Methodology: In this work we study language in two modalities: speech, the acoustic manifestation of language, and linguistic information, understood as grammar. For the first modality we use recordings, and for the second, transcriptions of the recordings. The dataset used is a subset of the Pitt Corpus containing patients with mild cognitive impairment and Alzheimer's disease. Our proposal includes exploring transfer learning and end-to-end methods such as wav2vec, HuBERT, BERT and RoBERTa; applying ASR tools to obtain automatic transcriptions; exploring variables that are independent of language and content; analysing the smallest units of speech, phonemes; and, finally, evaluating the most promising methods on an external dataset. Results: The results showed that, for transfer learning methods, the acoustic modality not only provides a solution independent of linguistic content but also outperforms methods based on transcriptions produced by ASR tools. The results also show that the methods of the linguistic modality are more robust than those of the acoustic modality. Conclusion: This work highlights the need for an ASR tool suited to transcribing the speech of dementia patients and for exploring spontaneous speech. The main contribution is the successful application of end-to-end and transfer learning models to dementia detection.