Articles and papers
Browsing Articles and papers by keyword "12 Mathematics::1203 Computer science::1203.17 Informatics"
Showing 1 - 20 of 64
Publication: A 3-D Simulation of a Single-Sided Linear Induction Motor with Transverse and Longitudinal Magnetic Flux (MDPI, 2020). Domínguez Hernández, Juan Antonio; Duro Carralero, Natividad; Gaudioso Vázquez, Elena. https://orcid.org/0000-0002-6437-5878

This paper presents a novel, improved configuration of a single-sided linear induction motor. The geometry of the motor has been modified so that it can operate with a mixed magnetic flux configuration and with a new configuration of paths for the eddy currents induced inside the aluminum plate. To this end, two 1 mm dielectric slots have been introduced into the aluminum layer of the moving part, an iron yoke has been added to the primary part, and the width of the transversal slots has been optimized. Specifically, in the enhanced motor, two magnetic fluxes circulate across two different planes: a longitudinal magnetic flux along the direction of movement and a transversal magnetic flux closed through a plane perpendicular to that direction. With this new configuration, the motor achieves a large increase in thrust force without increasing the electrical supply. In addition, the proposed model creates a new spatial configuration of the eddy currents and improves the main magnetic circuit. These novelties are relevant because they represent a great improvement in the efficiency of the linear induction motor at low velocities and at very low cost.
All simulations have been carried out with the 3-D finite element method, both at standstill and in motion, to obtain the characteristic curves of the main forces developed by the linear induction motor.

Publication: AAtt-CNN: Automatic Attention-Based Convolutional Neural Networks for Hyperspectral Image Classification (IEEE, 2023). Paoletti, Mercedes Eugenia; Moreno Álvarez, Sergio; Xue, Yu; Haut, Juan M.; Plaza, Antonio. https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0002-9069-7547; https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-9613-1659

Convolutional models have provided outstanding performance in the analysis of hyperspectral images (HSIs). These architectures are carefully designed to extract intricate information from nonlinear features for classification tasks. Notwithstanding their results, model architectures are manually engineered and further optimized for generalized feature extraction. In general terms, deep architectures are time-consuming in complex scenarios, since they require fine-tuning. Neural architecture search (NAS) has emerged as a suitable approach to tackle this shortcoming. In parallel, modern attention-based methods have boosted the recognition of sophisticated features. The search for optimal neural architectures combined with attention procedures motivates this work. This article develops a new method to automatically design and optimize convolutional neural networks (CNNs) for HSI classification using channel-based attention mechanisms. Specifically, 1-D and spectral–spatial (3-D) classifiers are considered to handle the large amount of information contained in HSIs from different perspectives. Furthermore, the proposed automatic attention-based CNN (AAtt-CNN) method meets the requirement of lowering the large computational overheads associated with architectural search. It is compared with current state-of-the-art (SOTA) classifiers.
Our experiments, conducted using a wide range of HSI images, demonstrate that AAtt-CNN succeeds in finding optimal architectures for classification, leading to SOTA results.

Publication: Analytical Communication Performance Models as a metric in the partitioning of data-parallel kernels on heterogeneous platforms (Springer, 2019). Rico Gallego, Juan Antonio; Díaz Martín, Juan Carlos; Calvo Jurado, Carmen; Moreno Álvarez, Sergio; García Zapata, Juan Luis. https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-8435-3844; https://orcid.org/0000-0001-9842-081X; https://orcid.org/0000-0003-1419-1672

Data partitioning on heterogeneous HPC platforms is formulated as an optimization problem. The algorithm starts from the communication performance models of the processes, representing their speeds, and outputs a data tiling that minimizes the communication cost. Traditionally, communication volume is the metric used to guide the partitioning, but such a metric is unable to capture the complexities introduced by uneven communication channels and the variety of patterns in kernel communications. We discuss Analytical Communication Performance Models as a new metric in partitioning algorithms. They have not been considered in the past for two reasons: prediction inaccuracy and the lack of tools to automatically build and solve formal expressions of kernel communications. We show how communication performance models fit the specific kernel and platform, and we present results that equal or even improve on previous volume-based strategies.

Publication: Anthropometric Ratios for Lower-Body Detection Based on Deep Learning and Traditional Methods (MDPI, 2022-03-04). Jaruenpunyasak, Jermphiphut; García Seco de Herrera, Alba; Duangsoithong, Rakkrit.

Lower-body detection can be useful in many applications, such as detecting falls and injuries during exercise. However, it can be challenging to detect the lower body, especially under various lighting and occlusion conditions.
This paper presents a novel lower-body detection framework using the proposed anthropometric ratios and compares the performance of deep learning (convolutional neural networks and OpenPose) and traditional detection methods. According to the results, the proposed framework helps to successfully detect accurate boundaries of the lower body under various illumination and occlusion conditions for lower-limb monitoring. The proposed framework of anthropometric ratios combined with convolutional neural networks (A-CNNs) achieves high accuracy (90.14%), while the combination of anthropometric ratios and traditional techniques (A-Traditional) for lower-body detection shows satisfactory performance, with an average accuracy of 74.81%. Although the accuracy of OpenPose (95.82%) is higher than that of the A-CNNs for lower-body detection, the A-CNNs have lower complexity than OpenPose, which is advantageous for lower-body detection and implementation in monitoring systems.

Publication: Asymmetric delayed relay feedback identification based on the n-shifting approach (Taylor and Francis Group, 2021-08-20). Sánchez Moreno, José; Dormido Bencomo, Sebastián; Miguel Escrig, Oscar; Romero Pérez, Julio Ariel. https://orcid.org/0000-0002-2405-8771; https://orcid.org/0000-0002-2472-2038; https://orcid.org/0000-0003-3397-2239

The paper presents an improvement of the n-shifting technique to identify the frequency response of an industrial process using a fully asymmetric relay with delay. The n-shifting approach allows the calculation of n + 1 points of G(s) from an asymmetric relay experiment. This set of n + 1 points is composed of G(0), G(jωosc), . . . , G(jnωosc), where ωosc is the oscillation frequency and G(jωosc) is in most cases located in the third quadrant of the Nyquist map. By delaying the relay output and repeating a similar experiment, n additional points of G(s) can be generated, where the first point is G(jω′osc) with 0 < ω′osc < ωosc.
In this way, it is possible to depict the full output spectrum of G(s) from zero to very high frequencies with a short relay experiment. An example of identification and tuning of a PID controller with data from the n-shifting approach is presented to show the validity of the approach.

Publication: Automatic Recommendation of Forum Threads and Reinforcement Activities in a Data Structure and Programming Course (MDPI, 2023-09-21). Plaza Morales, Laura; Araujo Serna, M. Lourdes; López Ostenero, Fernando; Martínez Romo, Juan.

Online learning is quickly becoming a popular alternative to traditional education. One of its key advantages lies in the flexibility it offers, allowing individuals to tailor their learning experiences to their unique schedules and commitments. Moreover, online learning enhances accessibility to education, breaking down geographical and economic barriers. In this study, we propose the use of advanced natural language processing techniques to design and implement a recommender that supports e-learning students by tailoring materials and reinforcement activities to students' needs. When a student posts a query in the course forum, our recommender system provides links to other discussion threads where related questions have been raised, as well as additional activities to reinforce the study of topics that have proven challenging. We have developed a content-based recommender whose algorithm extracts, with high precision, the key phrases, terms, and embeddings that describe the concepts in the student query and those present in other conversations and reinforcement activities. The recommender considers the similarity between the concepts extracted from the query and those covered in the course discussion forum and the exercise database to recommend the most relevant content for the student. Our results indicate that we can recommend both posts and activities with high precision (above 80%) using key phrases to represent the textual content.
This research makes three primary contributions. First, it centers on a remarkably specialized and novel domain; second, it introduces an effective recommendation approach guided exclusively by the student's query; and third, the recommendations not only provide answers to immediate questions but also encourage further learning through the recommendation of supplementary activities.

Publication: Building a framework for fake news detection in the health domain (San Francisco, CA: Public Library of Science, 2024-07-08). Martinez Rico, Juan R.; Araujo Serna, M. Lourdes; Martínez Romo, Juan; Bongelli, Ramona.

Disinformation in the medical field is a growing problem that carries significant risk. Therefore, it is crucial to detect and combat it effectively. In this article, we provide three elements to aid in this fight: 1) a new framework that collects health-related articles from verification entities and facilitates their check-worthiness and fact-checking annotation at the sentence level; 2) a corpus generated using this framework, composed of 10,335 sentences annotated with these two concepts and grouped into 327 articles, which we call KEANE (faKe nEws At seNtence lEvel); and 3) a new model for verifying fake news that combines specific identifiers of the medical domain with subject-predicate-object triplets, using Transformers and feedforward neural networks at the sentence level. This model predicts the fact-checking of sentences and evaluates the veracity of the entire article. After training this model on our corpus, we achieved remarkable results in the binary classification of sentences (check-worthiness F1: 0.749, fact-checking F1: 0.698) and in the final classification of complete articles (F1: 0.703). We also tested its performance against another public dataset and found that it performed better than most systems evaluated on that dataset.
Moreover, the corpus we provide differs from other existing corpora in its duality of sentence-article annotation, which can provide an additional level of justification for the prediction of truth or untruth made by the model.

Publication: Characterization of limit cycle oscillations induced by Fixed Threshold Samplers (Institute of Electrical and Electronics Engineers, 2022-06-17). Miguel Escrig, Oscar; Romero Pérez, Julio Ariel; Sánchez Moreno, José; Dormido Bencomo, Sebastián. https://orcid.org/0000-0002-2472-2038; https://orcid.org/0000-0002-2405-8771

In this work, a generalized study of the conditions for the appearance of limit cycle oscillations induced by any kind of sampler with multilevel fixed thresholds is presented. These kinds of samplers, referred to here as Fixed Threshold Samplers (FTS), are characterized by a series of parameters which, when selected properly, yield some of the most widely used forms of quantization in Event-Based Control (EBC). Because of some sampler characteristics, the resulting limit cycle oscillations can present a bias; therefore, the Dual Input Describing Function (DIDF) method is used to characterize them. The obtained DIDF is analyzed, revealing some interesting properties that simplify the robustness analysis. The analysis takes into account the influence of disturbance and reference signals on the system, which is generally overlooked in DF analysis.
Guidelines on how to perform the robustness analysis are given, and their application is shown through several case studies.

Publication: Cloud Implementation of Extreme Learning Machine for Hyperspectral Image Classification (IEEE, 2023). Haut, Juan M.; Moreno Álvarez, Sergio; Moreno Ávila, Enrique; Ayma Quirita, Victor Andrés; Pastor Vargas, Rafael; Paoletti, Mercedes Eugenia. https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0003-2987-2761; https://orcid.org/0000-0002-4089-9538; https://orcid.org/0000-0003-1030-3729

Classifying remotely sensed hyperspectral images (HSIs) has become a computationally demanding task given the extensive information contained throughout the spectral dimension. Furthermore, burgeoning data volumes compound the inherent computational and storage challenges of data processing and classification. Given their distributed processing capabilities, cloud environments have emerged as feasible solutions to handle these hurdles. This encourages the development of innovative distributed classification algorithms that take full advantage of the processing capabilities of such environments. Recently, computationally efficient methods have been implemented to boost network convergence by reducing the required training calculations. This letter develops a novel cloud-based distributed implementation of the extreme learning machine (CC-ELM) algorithm for efficient HSI classification. The proposal implements a fault-tolerant and scalable computing design while avoiding traditional batch-based backpropagation. CC-ELM has been evaluated on state-of-the-art HSI classification benchmarks, yielding promising results and proving the feasibility of cloud environments for processing large volumes of remote sensing and HSI data.
The code is available at https://github.com/mhaut/scalable-ELM-HSI.

Publication: Cloud-Based Analysis of Large-Scale Hyperspectral Imagery for Oil Spill Detection (IEEE, 2024). Haut, Juan M.; Moreno Álvarez, Sergio; Pastor Vargas, Rafael; Pérez García, Ámbar; Paoletti, Mercedes Eugenia. https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-4089-9538; https://orcid.org/0000-0002-2943-6348; https://orcid.org/0000-0003-1030-3729

Spectral indices are of fundamental importance in providing insights into the distinctive characteristics of oil spills, making them indispensable tools for effective action planning. The normalized difference oil index (NDOI) is a reliable metric suitable for the detection of coastal oil spills, effectively leveraging the visible and near-infrared (VNIR) spectral bands offered by commercial sensors. The present study explores the calculation of the NDOI, with a primary focus on leveraging remotely sensed imagery with rich spectral data. This undertaking necessitates a robust infrastructure to handle and process large datasets, demanding significant memory resources and scalability. To overcome these challenges, a novel cloud-based approach is proposed to conduct a distributed implementation of the NDOI calculation. This approach offers an accessible and intuitive solution, empowering developers to harness the benefits of cloud platforms. The proposal is evaluated by assessing its performance on the scene acquired by the airborne visible/infrared imaging spectrometer (AVIRIS) sensor during the 2010 oil rig disaster in the Gulf of Mexico. The catastrophic nature of the event and the subsequent challenges underscore the importance of remote sensing (RS) in facilitating decision-making processes. In this context, cloud-based approaches have emerged as a prominent technological advancement in the RS field.
The experimental results demonstrate noteworthy performance by the proposed cloud-based approach and pave the way for future research on fast decision-making applications in scalable environments.

Publication: Comparing fusion techniques for the ImageCLEF 2013 medical case retrieval task (Elsevier, 2014-03-27). García Seco de Herrera, Alba; Schaer, Roger; Markonis, Dimitrios; Müller, Henning.

Retrieval systems can supply similar cases with a proven diagnosis to a new example case under observation to help clinicians during their work. The ImageCLEFmed evaluation campaign proposes a framework where research groups can compare case-based retrieval approaches. This paper focuses on the case-based task and adds results from the compound figure separation and modality classification tasks. Several fusion approaches are compared to identify those best adapted to the heterogeneous data of the task. Fusion of visual and textual features is analyzed, demonstrating that the selection of the fusion strategy can improve the best performance on the case-based retrieval task.

Publication: A Comprehensive Survey of Imbalance Correction Techniques for Hyperspectral Data Classification (IEEE, 2023). Paoletti, Mercedes Eugenia; Mogollón Gutiérrez, Óscar; Moreno Álvarez, Sergio; Sancho, José Carlos; Haut, Juan M. https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0003-2980-9236; https://orcid.org/0000-0002-4584-6945; https://orcid.org/0000-0001-6701-961X

Land-cover classification is an important topic in the exploitation of remotely sensed hyperspectral (HS) data. In this regard, HS classifiers face important challenges, such as the high spectral redundancy and noise present in the data, and the fact that obtaining accurately labeled training data for supervised classification is expensive and time-consuming. As a result, the availability of the large amounts of training samples needed to alleviate the so-called Hughes phenomenon is often unfeasible in practice.
The class-imbalance problem, which results from the uneven distribution of labeled samples per class, is also a very challenging factor for HS classifiers. In this article, a comprehensive review of oversampling techniques is provided; these techniques mitigate the aforementioned issues by generating new samples for the minority classes. More specifically, this article pursues a twofold objective. First, it reviews the most relevant oversampling methods that can be adopted according to the nature of HS data. Second, it provides a comprehensive experimental study and comparison, which are useful for deriving practical conclusions about the performance of oversampling techniques in different HS image-based applications.

Publication: A Data-Driven Approach to Engineering Instruction: Exploring Learning Styles, Study Habits, and Machine Learning (IEEE Xplore, 2025-01-10). Isaza Domínguez, Lauren Genith; Robles Gómez, Antonio; Pastor Vargas, Rafael.

This study examined the impact of learning style and study habit alignment on the academic success of engineering students. Over a 16-week semester, 72 students from the Process Engineering and Electronic Engineering programs at the Universidad de Los Llanos participated in the study. They completed the Learning Styles Index questionnaire on the first day of class, and each week teaching methods and class activities were aligned with one of the four learning dimensions of the Felder-Silverman Learning Styles Model. Lesson 1 focused on one side of a learning dimension, lesson 2 on the opposite side, and the tutorial session incorporated both. Quizzes and engagement surveys assessed short-term academic performance, whereas midterm and final exam results measured long-term performance. Paired t-tests, Cohen's effect size, and two-way ANOVA showed that aligning teaching methods with learning styles improved students' short-term exam scores and engagement.
However, multiple regression analysis indicated that study habits (specifically time spent studying, study frequency, and scores on a custom-developed study quality survey) were much stronger predictors of midterm and final exam performance. Several machine learning models, including Random Forest and a Voting Ensemble, were tested to predict academic performance from study behavior data. The Voting Ensemble was found to be the strongest model, explaining 83% of the variance in final exam scores with a mean absolute error of 3.18. Our findings suggest that, while learning style alignment improves short-term engagement and comprehension, effective study habits and time management play a more important role in long-term academic success.

Publication: Deep mixed precision for hyperspectral image classification (Springer, 2021-02-03). Paoletti, Mercedes Eugenia; Tao, X.; Haut, Juan Mario; Moreno Álvarez, Sergio; Plaza, Antonio. https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-9613-1659

Hyperspectral images (HSIs) record scenes at different wavelength channels, providing detailed spatial and spectral information. How to store and process these high-dimensional data plays a vital role in many practical applications, where classification technologies have emerged as excellent processing tools. However, their high computational complexity and energy requirements pose challenges. Adopting low-power architectures and deep learning (DL) approaches must provide acceptable computing capability without sacrificing accuracy. However, most DL architectures employ single precision (FP32) to train models, and some large DL architectures are limited by available memory and computation resources, which can negatively affect the network learning process.
This letter addresses these challenges by using mixed precision in DL architectures for HSI classification to speed up the training process and reduce memory consumption and accesses. The proposed models are evaluated on four widely used data sets. In addition, low- and high-power devices are compared, considering NVIDIA Jetson Xavier and Titan RTX GPUs, to evaluate the viability of the proposal for on-board processing devices. The results obtained demonstrate the efficiency and effectiveness of these models in the HSI classification task on both devices. Source code: https://github.com/mhaut/CNN-MP-HSI.

Publication: Deep shared proxy construction hashing for cross-modal remote sensing image fast target retrieval (Elsevier, 2024). Han, Lirong; Paoletti, Mercedes Eugenia; Moreno Álvarez, Sergio; Haut, Juan M.; Plaza, Antonio. https://orcid.org/0000-0002-8613-7037; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-9613-1659

The diversity of remote sensing (RS) image modalities has expanded alongside advancements in RS technologies. A plethora of optical, multispectral, and hyperspectral RS images offer rich geographic class information. The ability to swiftly access multiple RS image modalities is crucial for fully harnessing the potential of RS imagery. In this work, an innovative method called Deep Shared Proxy Construction Hashing (DSPCH) is introduced for cross-modal hyperspectral scene target retrieval using accessible RS images such as optical and sketch images. Initially, a shared proxy hash code is generated in the hash space for each land use class. Subsequently, an end-to-end deep hash network is built to generate hash codes for hyperspectral pixels and accessible RS images. Furthermore, a proxy hash loss function is designed to optimize the proposed deep hashing network, aiming to generate hash codes that closely resemble the corresponding proxy hash code.
Finally, two benchmark datasets are established for cross-modal hyperspectral and accessible RS image retrieval, allowing us to conduct extensive experiments. Our experimental results validate that the novel DSPCH method can efficiently and effectively achieve cross-modal RS image target retrieval, opening up new avenues in the field of cross-modal RS image retrieval.

Publication: Design and Development of an SVM-Powered Underwater Acoustic Modem (MDPI, 2024-05-05). Guerrero Chilaber, Gabriel S.; Moreno Salinas, David; Sánchez Moreno, José. https://orcid.org/0009-0009-9959-0250

Underwater acoustic communication is fraught with challenges, including signal distortion, noise, and interference unique to aquatic environments. This study aimed to advance the field by developing a novel underwater modem system that utilizes machine learning for signal classification, enhancing the reliability and clarity of underwater transmissions. The research introduced a system architecture incorporating a Lattice Semiconductor FPGA for signal modulation and a half-pipe waveguide to emulate the underwater environment. For signal classification, support vector machines (SVMs) were leveraged, with the continuous wavelet transform (CWT) employed for feature extraction from acoustic signals. Comparative analysis with traditional signal processing techniques highlighted the efficacy of the CWT in this context. The experiments and tests carried out with the system demonstrated superior performance in classifying modulated signals under simulated underwater conditions, with the SVM providing robust classification despite the presence of noise. The use of the CWT for feature extraction significantly enhanced the model's accuracy, eliminating the need for further dimensionality reduction.
The integration of machine learning with advanced signal processing techniques therefore presents a promising research line for overcoming the complexities of underwater acoustic communication. The findings underscore the potential of data mining methodologies to improve signal clarity and transmission reliability in aquatic environments.

Publication: Designing an effective semantic fluency test for early MCI diagnosis with machine learning (Elsevier, 2024-08-16). Gómez-Valades Batanero, Alba; Martínez Tomás, Rafael; Rincón Zamorano, Mariano.

Semantic fluency tests are among the key tests used in batteries for the early detection of Mild Cognitive Impairment (MCI), since impairments in speech and semantic memory are among the first symptoms, and they have attracted the attention of a large number of studies. Several new semantic categories and variables capable of providing complementary information of clinical interest have been proposed to increase their effectiveness. However, this also extends the time required to complete all the tests and reach an overall diagnosis. There is therefore a need to reduce the number of tests in the batteries, and thus the time spent on them, while maintaining or increasing their effectiveness. This study used machine learning methods to determine the smallest and most efficient combination of semantic categories and variables that achieves this goal. We utilized a database containing 423 assessments from 141 subjects, with each subject having undergone three assessments spaced approximately one year apart. Subjects were categorized into three diagnostic groups: healthy (diagnosed as healthy in all three assessments), stable MCI (consistently diagnosed as MCI), and heterogeneous MCI (alternating between healthy and MCI diagnoses across assessments).
We found that the most efficient combination of semantic fluency tests for distinguishing between these categories included the animals and clothes semantic categories together with the variables corrects, switching, clustering, and total clusters. This combination is ideal for scenarios that require a balance between time efficiency and diagnostic capability, such as population-based screenings.

Publication: Discovering HIV related information by means of association rules and machine learning (Nature Research, 2022-10-22). Araujo Serna, M. Lourdes; Martínez Romo, Juan; Bisbal, Otilia; Sanchez de Madariaga, Ricardo; The Cohort of the National AIDS Network (CoRIS). https://orcid.org/0000-0003-3746-3378

Acquired immunodeficiency syndrome (AIDS) is still one of the main health problems worldwide. It is therefore essential to keep making progress in improving the prognosis and quality of life of affected patients. One way to advance along this pathway is to uncover connections between other disorders associated with HIV/AIDS, so that they can be anticipated and possibly mitigated. We propose to achieve this using Association Rules (ARs), which allow us to represent the dependencies between sets of diseases and other specific diseases. However, classical techniques systematically generate every AR meeting some minimal conditions on data frequency, producing a vast number of uninteresting ARs that need to be filtered out. The lack of manually annotated ARs has favored unsupervised filtering, even though it produces limited results. In this paper, we propose a semi-supervised system able to identify relevant ARs among HIV-related diseases with a minimal amount of annotated training data. Our system has extracted a good number of relationships between HIV-related diseases that have previously been detected in the literature but are scattered and often little known.
Furthermore, a number of plausible new relationships have emerged that deserve further investigation by qualified medical experts.

Publication: Distributed Deep Learning for Remote Sensing Data Interpretation (IEEE, 2021-03-15). Haut, Juan Mario; Paoletti, Mercedes Eugenia; Moreno Álvarez, Sergio; Plaza, Javier; Rico Gallego, Juan Antonio; Plaza, Antonio. https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0002-2384-9141; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-9613-1659

As a newly emerging technology, deep learning (DL) is a very promising field for big data applications. Remote sensing often involves huge data volumes obtained daily by numerous in-orbit satellites, which makes it a perfect target area for data-driven applications. Nowadays, technological advances in software and hardware have a noticeable impact on Earth observation applications, and more specifically on remote sensing techniques and procedures, allowing for the acquisition of data sets of greater quality at higher acquisition rates. This results in the collection of huge amounts of remotely sensed data, characterized by their large spatial resolution (in terms of the number of pixels per scene) and very high spectral dimensionality, with hundreds or even thousands of spectral bands. As a result, remote sensing instruments on spaceborne and airborne platforms are now generating data cubes with extremely high dimensionality, imposing several restrictions in terms of both processing runtimes and storage capacity. In this article, we provide a comprehensive review of the state of the art in DL for remote sensing data interpretation, analyzing the strengths and weaknesses of the most widely used techniques in the literature, as well as an exhaustive description of their parallel and distributed implementations (with a particular focus on those conducted using cloud computing systems).
We also provide quantitative results, offering an assessment of a DL technique in a specific case study (source code available: https://github.com/mhaut/cloud-dnn-HSI). The article concludes with some remarks and hints about future challenges in the application of DL techniques to distributed remote sensing data interpretation problems. We emphasize the role of the cloud in providing a powerful architecture that is now able to manage vast amounts of remotely sensed data thanks to its implementation simplicity, low cost, and high efficiency compared to other parallel and distributed architectures, such as grid computing or dedicated clusters.

Publication: Distributed multi-UAV shield formation based on virtual surface constraints (Elsevier, 2024-03-30). Zaragoza, Salvador; Guinaldo Losada, María; Sánchez Moreno, José; Mañas Álvarez, Francisco José.

This paper proposes a method for the deployment of a multi-agent system of unmanned aerial vehicles (UAVs) as a shield, with potential applications in the protection of infrastructures. The shield shape is modeled as a quadric surface in 3D space. To design the desired formation (target distances between agents and interconnections), an algorithm is proposed whose input parameters are just the parametrization of the quadric and the number of agents in the system. This algorithm guarantees that the agents are almost uniformly distributed over the virtual surface and that the topology is a Delaunay triangulation. Moreover, a new method, executed locally, is proposed to check whether the resulting triangulation meets that condition. Because this topology ensures that the formation is rigid, a distributed control law based on the gradient of a potential function is proposed to acquire the desired shield shape, and proofs of stability are provided. Finally, simulation and experimental results illustrate the effectiveness of the proposed approach.
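The last abstract describes a distributed control law that drives agents toward target inter-agent distances by descending the gradient of a potential function. The sketch below illustrates that general idea only; it is not the paper's actual control law, and the quadratic squared-distance potential, the three-agent triangle, the gain, and the iteration count are all illustrative assumptions.

```python
import math

def formation_step(pos, edges, gain=0.05):
    """One synchronous step of a gradient-descent formation control law.

    pos   : list of [x, y, z] agent positions
    edges : dict {(i, j): d_ij} of target inter-agent distances
    Each agent descends the gradient of the assumed potential
      V = sum over edges of (||p_i - p_j||^2 - d_ij^2)^2 / 4,
    which requires only the relative positions of its neighbors,
    so the update is computable locally by each agent.
    """
    grads = [[0.0, 0.0, 0.0] for _ in pos]
    for (i, j), d in edges.items():
        diff = [pos[i][k] - pos[j][k] for k in range(3)]
        err = sum(c * c for c in diff) - d * d  # squared-distance error
        for k in range(3):
            grads[i][k] += err * diff[k]
            grads[j][k] -= err * diff[k]
    return [[p[k] - gain * g[k] for k in range(3)]
            for p, g in zip(pos, grads)]

# Three agents converging to an equilateral triangle with side 1
pos = [[0.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.4, 0.8, 0.1]]
edges = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0}
for _ in range(500):
    pos = formation_step(pos, edges)
dists = [math.dist(pos[i], pos[j]) for (i, j) in edges]
```

Because the triangle graph is rigid, the formation converges to the target shape up to rotation and translation; with more agents, the Delaunay-triangulated topology described in the abstract plays the same role of guaranteeing rigidity.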