Articles and papers
Permanent URI for this collection
Browsing Articles and papers by Center "Facultades y escuelas::E.T.S. de Ingeniería Informática"
Showing 1 - 20 of 80
Publication: A 3-D Simulation of a Single-Sided Linear Induction Motor with Transverse and Longitudinal Magnetic Flux (MDPI, 2020). Domínguez Hernández, Juan Antonio; Duro Carralero, Natividad; Gaudioso Vázquez, Elena (https://orcid.org/0000-0002-6437-5878)

This paper presents a novel and improved configuration of a single-sided linear induction motor. The geometry of the motor has been modified so that it can operate with a mixed magnetic flux configuration and with a new configuration of paths for the eddy currents induced inside the aluminum plate. To this end, two 1 mm slots of dielectric have been introduced into the aluminum layer of the moving part, an iron yoke has been added to the primary part, and the width of the transversal slots has been optimized. Specifically, in the enhanced motor, two magnetic fluxes circulate across two different planes: a longitudinal magnetic flux along the direction of movement and a transversal magnetic flux closed through a plane perpendicular to that direction. With this new configuration, the motor achieves a large increase in thrust force without increasing the electrical supply. In addition, the proposed model creates a new spatial configuration of the eddy currents and improves the main magnetic circuit. These novelties are relevant because they represent a great improvement in the efficiency of the linear induction motor at low velocities and at a very low cost.
All simulations have been carried out with the 3-D finite element method, both in standstill conditions and in motion, in order to obtain the characteristic curves of the main forces developed by the linear induction motor.

Publication: A bibliometric analysis of off-line handwritten document analysis literature (1990–2020) (Elsevier, 2022-05). Ruiz Parrado, Victoria; Vélez, José F.; Heradio Gil, Rubén; Aranda Escolástico, Ernesto; Sánchez Ávila, Ángel

Providing computers with the ability to process handwriting is both important and challenging, since many difficulties (e.g., different writing styles, alphabets, languages, etc.) need to be overcome to address a variety of problems (text recognition, signature verification, writer identification, word spotting, etc.). This paper reviews the growing literature on off-line handwritten document analysis over the last thirty years. A sample of 5389 articles is examined using bibliometric techniques. This paper identifies (i) the most influential articles in the area, (ii) the most productive authors and their collaboration networks, (iii) the countries and institutions that have led research on the topic, (iv) the journals and conferences that have published most papers, and (v) the most relevant research topics (and their related tasks and methodologies) and their evolution over the years.

Publication: A block-based model for monitoring of human activity (Elsevier, 2011-03). Folgado Zuñiga, Encarnación; Carmona, Enrique J.; Rincón Zamorano, Mariano; Bachiller Mayoral, Margarita

The study of human activity is applicable to a large number of science and technology fields, such as surveillance, biomechanics or sports applications.
This article presents BB6-HM, a block-based human model for real-time monitoring of a large number of visual events and states related to human activity analysis, which can be used as components of a library to describe more complex activities in such important areas as surveillance, for example, luggage at airports, clients' behaviour in banks and patients in hospitals. BB6-HM is inspired by the proportionality rules commonly used in the Visual Arts, i.e., dividing the human silhouette into six rectangles of the same height. The major advantage of this proposal is that the analysis of the human figure can easily be broken down into regions, so that information about activities can be obtained. The computational load is very low, making a very fast implementation possible. Finally, this model has been applied to build classifiers for the detection of primitive events and visual attributes using heuristic rules and machine learning techniques.

Publication: A historical perspective of algorithmic lateral inhibition and accumulative computation in computer vision (Elsevier, 2011-03). Fernández Caballero, Antonio; Carmona, Enrique J.; Delgado, Ana Esperanza; López López, Carmen María

Certainly, one of the prominent ideas of Professor José Mira was that it is absolutely mandatory to specify the mechanisms and/or processes underlying each task and inference mentioned in an architecture in order to make that architecture operational. The conjecture of the last fifteen years of joint research has been that any bottom-up organization may be made operational using two biologically inspired methods: "algorithmic lateral inhibition", a generalization of lateral inhibition anatomical circuits, and "accumulative computation", a working memory related to the temporal evolution of the membrane potential. This paper is dedicated to the computational formulation of both methods.
Finally, all of the works of our group related to this methodological approach are mentioned and summarized, showing that all of them support its validity.

Publication: A keyphrase-based approach for interpretable ICD-10 code classification of Spanish medical reports (Elsevier, 2021). Fabregat Marcos, Hermenegildo; Duque Fernández, Andrés; Araujo Serna, M. Lourdes; Martínez Romo, Juan

Background and objectives: The 10th version of the International Classification of Diseases (ICD-10) codification system has been widely adopted by the health systems of many countries, including Spain. However, manual code assignment of Electronic Health Records (EHR) is a complex and time-consuming task that requires a great amount of specialised human resources. Therefore, several machine learning approaches are being proposed to assist in the assignment task. In this work we present an alternative system for automatically recommending ICD-10 codes to be assigned to EHRs. Methods: Our proposal is based on characterising ICD-10 codes by a set of keyphrases that represent them. These keyphrases include not only those that have literally appeared in some EHR with the considered ICD-10 codes assigned, but also others obtained by a statistical process able to capture expressions that have led the annotators to assign the code. Results: The result is an information model that allows codes to be efficiently recommended for a new EHR based on its textual content. We explore an approach that proves to be competitive with other state-of-the-art approaches and can be combined with them to optimise results. Conclusions: In addition to its effectiveness, the recommendations of this method are easily interpretable, since the phrases in an EHR leading to the recommendation of an ICD-10 code are known.
Moreover, the keyphrases associated with each ICD-10 code can be a valuable additional source of information for other approaches, such as machine learning techniques.

Publication: A Monte Carlo tree search conceptual framework for feature model analyses (Elsevier, 2023-01). Horcas, José Miguel; Galindo, José A.; Benavides, David; Heradio Gil, Rubén; Fernández Amoros, David José

Challenging domains of the future such as Smart Cities, Cloud Computing, or Industry 4.0 expose highly variable systems with colossal configuration spaces. The automated analysis of those systems' variability has often relied on SAT solving and constraint programming. However, many of the analyses have to deal with the uncertainty introduced by the fact that an exhaustive exploration of the whole configuration space is usually intractable. In addition, not all analyses need to deal with the configuration space of the feature models; some are performed over other search spaces, such as the structure of the feature models, the constraints, or the implementation artifacts, instead of configurations. This paper proposes a conceptual framework that tackles several of those analyses using Monte Carlo tree search methods, which have proven successful in vast search spaces (e.g., game theory, scheduling tasks, security, program synthesis, etc.). Our general framework is formally described, and its flexibility to cope with a diversity of analysis problems is discussed. We provide a Python implementation of the framework that shows the feasibility of our proposal, identifying up to 11 lessons learned and open challenges about the usage of Monte Carlo methods in the software product line context.
With this contribution, we envision that different problems can be addressed using Monte Carlo simulations and that our framework can be used to advance the state of the art one step forward.

Publication: A new video segmentation method of moving objects based on blob-level knowledge (Elsevier, 2008-02-01). Carmona, Enrique J.; Martínez Campos, Javier; Mira Mira, José

Variants of the background subtraction method are broadly used for the detection of moving objects in video sequences in different applications. In this work we propose a new approach to the background subtraction method which operates in the colour space and manages the colour information in the segmentation process to detect and eliminate noise. This new method is combined with blob-level knowledge associated with different types of blobs that may appear in the foreground. The idea is to process each pixel differently according to the category to which it belongs: real moving objects, shadows, ghosts, reflections, fluctuation or background noise. Thus, the foreground resulting from processing each image frame is refined selectively, applying at each instant the appropriate operator according to the type of noise blob we wish to eliminate. The approach proposed is adaptive, because it allows both the background model and the threshold model to be updated.
On the one hand, the results obtained confirm the robustness of the proposed method in a wide range of different sequences and, on the other hand, they underline the importance of handling three colour components in the segmentation process rather than just a single grey-level component.

Publication: A Pragmatic Framework for Assessing Learning Outcomes in Competency-Based Courses (Institute of Electrical and Electronics Engineers, 2024-01-19). Vargas, Hector; Heradio Gil, Rubén; Farias, Gonzalo; Lei, Zhongcheng; Torre, Luis de la

Contribution: A competency assessment framework that enables learning analytics for course monitoring and continuous improvement. Our work fills the gap in systematic methods for competency assessment in higher education. Background: Many institutions are shifting toward competency-based education, thus encouraging their educators to start evaluating their students under this paradigm. Previous research shows that structured assessment models are fundamental in guiding educators toward this adoption. Intended outcomes: An assessment model for competency-based education that is easy to adopt and use, while facilitating the application of learning analytics techniques. Application design: The new framework considerably extends a prior model we proposed three years ago. Two engineering competency-based courses used the framework for assessment. Assessment rubrics were prepared and used to evaluate and collect the students' data progressively, thus enabling the use of learning analytics for decision-making. Findings: Thanks to the model, (i) students received a detailed report of their achievements, including a thorough explanation and justification of the evaluation criteria; and (ii) instructors could improve the course and provide objective evidence of their actions to quality assurance agencies.
As a result, the framework is presently being used in fifteen courses taught in eight different university degrees at the Pontifical Catholic University of Valparaiso (PUCV).

Publication: A Survey of Video Datasets for Human Action and Activity Recognition (Elsevier, 2013-06). Chaquet, José M.; Carmona, Enrique J.; Fernández Caballero, Antonio

Vision-based human action and activity recognition has an increasing importance among the computer vision community, with applications to visual surveillance, video retrieval and human–computer interaction. In recent years, more and more datasets dedicated to human action and activity recognition have been created. The use of these datasets allows us to compare different recognition systems with the same input data. The survey introduced in this paper addresses the lack of a complete description of the most important public datasets for video-based human activity and action recognition, and aims to guide researchers in the selection of the most suitable dataset for benchmarking their algorithms.

Publication: AAtt-CNN: Automatic Attention-Based Convolutional Neural Networks for Hyperspectral Image Classification (IEEE, 2023). Paoletti, Mercedes Eugenia; Moreno Álvarez, Sergio; Xue, Yu; Haut, Juan M.; Plaza, Antonio (https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0002-9069-7547; https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-9613-1659)

Convolutional models have provided outstanding performance in the analysis of hyperspectral images (HSIs). These architectures are carefully designed to extract intricate information from nonlinear features for classification tasks. Notwithstanding their results, model architectures are manually engineered and further optimized for generalized feature extraction. In general terms, deep architectures are time-consuming for complex scenarios, since they require fine-tuning. Neural architecture search (NAS) has emerged as a suitable approach to tackle this shortcoming.
In parallel, modern attention-based methods have boosted the recognition of sophisticated features. The search for optimal neural architectures combined with attention procedures motivates the development of this work. This article develops a new method to automatically design and optimize convolutional neural networks (CNNs) for HSI classification using channel-based attention mechanisms. Specifically, 1-D and spectral–spatial (3-D) classifiers are considered to handle the large amount of information contained in HSIs from different perspectives. Furthermore, the proposed automatic attention-based CNN (AAtt-CNN) method meets the requirement to lower the large computational overheads associated with architectural search. It is compared with current state-of-the-art (SOTA) classifiers. Our experiments, conducted using a wide range of HSI images, demonstrate that AAtt-CNN succeeds in finding optimal architectures for classification, leading to SOTA results.

Publication: Analytical Communication Performance Models as a metric in the partitioning of data-parallel kernels on heterogeneous platforms (Springer, 2019). Rico Gallego, Juan Antonio; Díaz Martín, Juan Carlos; Calvo Jurado, Carmen; Moreno Álvarez, Sergio; García Zapata, Juan Luis (https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-8435-3844; https://orcid.org/0000-0001-9842-081X; https://orcid.org/0000-0003-1419-1672)

Data partitioning on heterogeneous HPC platforms is formulated as an optimization problem. The algorithm departs from the communication performance models of the processes, representing their speeds, and outputs a data tiling that minimizes the communication cost. Traditionally, communication volume is the metric used to guide the partitioning, but this metric is unable to capture the complexities introduced by uneven communication channels and the variety of patterns in the kernel communications.
We discuss Analytical Communication Performance Models as a new metric in partitioning algorithms. They have not been considered in the past for two reasons: prediction inaccuracy and the lack of tools to automatically build and solve kernel communication formal expressions. We show how communication performance models fit the specific kernel and platform, and we present results that equal or even improve on previous volume-based strategies.

Publication: An anytime optimal control strategy for multi-rate systems (IEEE, 2017-02-20). Aranda Escolástico, Ernesto; Guinaldo Losada, María; Cuenca, Ángel; Salt, Julián; Dormido Canto, Sebastián (https://orcid.org/0000-0003-4466-2666; https://orcid.org/0000-0002-9640-2658)

In this work, we study a dual-rate system with fast sampling at the input and propose a design to optimize the consecutive control signals. The objective of the optimization is to maximize the decay rate, depending on the available resources, in order to stabilize the control system faster. Stability conditions are stated in terms of Linear Matrix Inequalities (LMIs). The control solution is extended to time delays. A numerical example illustrates the benefits of the control proposal.

Publication: Asynchronous periodic event-triggered control with dynamical controllers (Elsevier, 2018-04-20). Aranda Escolástico, Ernesto; Rodríguez, Carlos; Guinaldo Losada, María; Guzmán, José Luis; Dormido Canto, Sebastián

In this work, we study a networked control system under a periodic event-triggered control strategy. In addition, the input and the output of the system are sampled at different rates, which makes it possible to obtain a compromise between performance and consumption of communication resources. Stability analysis and L2-gain analysis are carried out through Lyapunov-Krasovskii techniques.
Simulation results of a quadruple-tank process show the benefits of the approach.

Publication: Automatic design of analog electronic circuits using grammatical evolution (Elsevier, 2018-01). Castejón, Federico; Carmona, Enrique J.

A new approach for the automatic synthesis of analog electronic circuits based on grammatical evolution is presented. Grammatical evolution is a grammar-based evolutionary algorithm which can generate code in any programming language and uses variable-length linear binary strings. The decoding of each chromosome determines which production rules in a Backus-Naur Form grammar definition are used in a genotype-to-phenotype mapping process. In our method, decoding focuses on obtaining circuit netlists. A new grammar for generating such netlists and a variant of the XOSites-based crossover operator are also presented. A post-processing stage is needed to adapt the decoded netlist prior to its evaluation using the NGSpice simulator. Our approach was applied to several case studies, comprising a total of seven benchmark circuits. A comparison with previous works in the literature shows that our method produces competitive circuits in relation to the degree of compliance with the output specifications, the number of components and the number of evaluations used in the evolutionary process.

Publication: A bibliometric analysis of 10 years of research on symptom networks in psychopathology and mental health (Elsevier, 2022-02). Ausín, Berta; Castellanos, Miguel Ángel; González Sanguino, Clara; Heradio Gil, Rubén

Psychopathology networks consist of aspects (e.g., symptoms) of mental disorders (nodes) and the connections between those aspects (edges). This article aims to analyze the research literature on network analysis in psychopathology and mental health over the last ten years. Statistical descriptive analysis was complemented with two bibliometric techniques: performance analysis and co-word analysis.
Publications have increased from 1 article published in 2010 to 172 papers published in 2020. The 398 articles in the sample have 1,910 authors in total, most of them occasional contributors. The Journal of Affective Disorders has the highest number of publications on network analysis in psychopathology and mental health, followed by the Journal of Abnormal Psychology and Psychological Medicine. The present study shows that this perspective in psychopathology and mental health is a recent field of study, but one with solid advances in recent years from a wide variety of researchers, mainly from the USA and Europe, who have extensively studied symptom networks in depression, anxiety, and post-traumatic stress disorders. However, gaps are identified in other psychological behaviors such as suicide, in populations such as the elderly, and in gender studies.

Publication: Building a framework for fake news detection in the health domain (San Francisco, CA: Public Library of Science, 2024-07-08). Martinez Rico, Juan R.; Araujo Serna, M. Lourdes; Martínez Romo, Juan; Bongelli, Ramona

Disinformation in the medical field is a growing problem that carries a significant risk. Therefore, it is crucial to detect and combat it effectively. In this article, we provide three elements to aid in this fight: 1) a new framework that collects health-related articles from verification entities and facilitates their check-worthiness and fact-checking annotation at the sentence level; 2) a corpus generated using this framework, composed of 10335 sentences annotated with these two concepts and grouped into 327 articles, which we call KEANE (faKe nEws At seNtence lEvel); and 3) a new model for verifying fake news that combines specific identifiers of the medical domain with subject-predicate-object triplets, using Transformers and feedforward neural networks at the sentence level.
This model predicts the fact-checking of sentences and evaluates the veracity of the entire article. After training this model on our corpus, we achieved remarkable results in the binary classification of sentences (check-worthiness F1: 0.749, fact-checking F1: 0.698) and in the final classification of complete articles (F1: 0.703). We also tested its performance against another public dataset and found that it performed better than most systems evaluated on that dataset. Moreover, the corpus we provide differs from other existing corpora in its duality of sentence-article annotation, which can provide an additional level of justification for the model's prediction of truth or untruth.

Publication: Can deep learning techniques improve classification performance of vandalism detection in Wikipedia? (Elsevier, 2019). Martinez-Rico, Juan R.; Martínez Romo, Juan; Araujo Serna, M. Lourdes

Wikipedia is a free encyclopedia created as an international collaborative project. One of its peculiarities is that any user can edit its contents almost without restrictions, which has given rise to a phenomenon known as vandalism. Vandalism is any attempt that deliberately seeks to damage the integrity of the encyclopedia. To address this problem, several automatic detection systems and associated features have been developed in recent years. This work implements one of these systems, which uses three sets of new features based on different techniques. Specifically, we study the applicability of a leading technology such as deep learning to the problem of vandalism detection. The first set is obtained by expanding a list of vandal terms, taking advantage of the semantic-similarity relations existing in word embeddings and deep neural networks. Deep learning techniques are applied to the second set of features, specifically Stacked Denoising Autoencoders (SDA), in order to reduce the dimensionality of a bag-of-words model obtained from a set of edits taken from Wikipedia.
The last set uses graph-based ranking algorithms to generate a list of vandal terms from a vandalism corpus extracted from Wikipedia. These three sets of new features are evaluated separately as well as together to study their complementarity, improving on the results in the state of the art. The system evaluation has been carried out on a corpus extracted from Wikipedia (WP_Vandal) as well as on another called PAN-WVC-2010, which was used in a vandalism detection competition held at the CLEF conference.

Publication: Cloud Implementation of Extreme Learning Machine for Hyperspectral Image Classification (IEEE, 2023). Haut, Juan M.; Moreno Álvarez, Sergio; Moreno Ávila, Enrique; Ayma Quirita, Victor Andrés; Pastor Vargas, Rafael; Paoletti, Mercedes Eugenia (https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0003-2987-2761; https://orcid.org/0000-0002-4089-9538; https://orcid.org/0000-0003-1030-3729)

Classifying remotely sensed hyperspectral images (HSIs) has become a computationally demanding task given the extensive information contained throughout the spectral dimension. Furthermore, burgeoning data volumes compound the inherent computational and storage challenges of data processing and classification. Given their distributed processing capabilities, cloud environments have emerged as feasible solutions to handle these hurdles. This encourages the development of innovative distributed classification algorithms that take full advantage of the processing capabilities of such environments. Recently, computationally efficient methods have been implemented to boost network convergence by reducing the required training calculations. This letter develops a novel cloud-based distributed implementation of the extreme learning machine (CC-ELM) algorithm for efficient HSI classification. The proposal implements a fault-tolerant and scalable computing design while avoiding traditional batch-based backpropagation.
CC-ELM has been evaluated over state-of-the-art HSI classification benchmarks, yielding promising results and proving the feasibility of cloud environments for processing large volumes of remote sensing and HSI data. The code is available at https://github.com/mhaut/scalable-ELM-HSI

Publication: Cloud-Based Analysis of Large-Scale Hyperspectral Imagery for Oil Spill Detection (IEEE, 2024). Haut, Juan M.; Moreno Álvarez, Sergio; Pastor Vargas, Rafael; Pérez García, Ámbar; Paoletti, Mercedes Eugenia (https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-4089-9538; https://orcid.org/0000-0002-2943-6348; https://orcid.org/0000-0003-1030-3729)

Spectral indices are of fundamental importance in providing insights into the distinctive characteristics of oil spills, making them indispensable tools for effective action planning. The normalized difference oil index (NDOI) is a reliable metric, suitable for the detection of coastal oil spills, that effectively leverages the visible and near-infrared (VNIR) spectral bands offered by commercial sensors. The present study explores the calculation of the NDOI with a primary focus on leveraging remotely sensed imagery with rich spectral data. This undertaking necessitates a robust infrastructure to handle and process large datasets, demanding significant memory resources while ensuring scalability. To overcome these challenges, a novel cloud-based approach is proposed in this study to conduct a distributed implementation of the NDOI calculation. This approach offers an accessible and intuitive solution, empowering developers to harness the benefits of cloud platforms. The proposal is evaluated by assessing its performance on the scene acquired by the airborne visible/infrared imaging spectrometer (AVIRIS) sensor during the 2010 oil rig disaster in the Gulf of Mexico.
The catastrophic nature of the event and the subsequent challenges underscore the importance of remote sensing (RS) in facilitating decision-making processes. In this context, cloud-based approaches have emerged as a prominent technological advancement in the RS field. The experimental results demonstrate noteworthy performance by the proposed cloud-based approach and pave the path for future research on fast decision-making applications in scalable environments.

Publication: Comparative Evaluation of the Fast Marching Method and the Fast Evacuation Method for Heterogeneous Media (Taylor & Francis, 2021-08-30). Fernández Galán, Severino

The evacuation problem is usually addressed by assuming homogeneous media where pedestrians move freely in the presence of several exits and obstacles. From a more general perspective, this work considers heterogeneous media in which the velocity of pedestrians depends on their location. We use cellular automata with a floor field that indicates promising movements to pedestrians and, in this context, we extend two competitive evacuation methods so that they can be applied to heterogeneous media: the Fast Marching Method and the Fast Evacuation Method. Furthermore, we evaluate the performance that these two methods exhibit over different simulated scenarios characterized by the presence of heterogeneous media. The winning method in terms of evacuation effectiveness is greatly influenced by the particular problem being simulated.
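The floor-field idea mentioned in this last abstract can be illustrated with a minimal sketch. This is not the publication's implementation: it is a generic Dijkstra-style sweep over a grid with location-dependent speeds, a common discrete stand-in for the Fast Marching Method's solution of the eikonal equation. The function name, the cost model (one cell traversed at the destination cell's speed) and the toy grid below are illustrative assumptions.

```python
import heapq

def floor_field(grid_speed, exits):
    """Travel-time-to-nearest-exit field on a 2-D grid (illustrative
    sketch, not the paper's code). grid_speed[r][c] is the local
    pedestrian speed (0 marks an obstacle); exits lists (row, col)
    exit cells. Pedestrians then move by descending the field."""
    rows, cols = len(grid_speed), len(grid_speed[0])
    INF = float("inf")
    field = [[INF] * cols for _ in range(rows)]
    heap = []
    for r, c in exits:
        field[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        t, r, c = heapq.heappop(heap)
        if t > field[r][c]:
            continue  # stale heap entry, already improved
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid_speed[nr][nc] > 0:
                # time to cross one cell at that cell's local speed;
                # heterogeneous media = speeds varying with location
                nt = t + 1.0 / grid_speed[nr][nc]
                if nt < field[nr][nc]:
                    field[nr][nc] = nt
                    heapq.heappush(heap, (nt, nr, nc))
    return field

# Toy 3x4 corridor whose right half is slower (a heterogeneous medium);
# the single exit is the top-left cell, and one cell is an obstacle.
speeds = [[1.0, 1.0, 0.5, 0.5],
          [1.0, 0.0, 0.5, 0.5],
          [1.0, 1.0, 0.5, 0.5]]
ff = floor_field(speeds, [(0, 0)])
```

In an evacuation simulation, each pedestrian would step to the neighbouring cell with the lowest field value; slow regions receive proportionally higher traversal times, so the field routes pedestrians around them when a faster detour exists.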