Person:
Moreno Álvarez, Sergio

Profile photo
Email address
ORCID
0000-0002-1858-9920
Date of birth
Research projects
Organizational units
Job position
Surname
Moreno Álvarez
Given name
Sergio
Name

Search results

Showing 1 - 10 of 15
  • Publication
    Deep shared proxy construction hashing for cross-modal remote sensing image fast target retrieval
    (Elsevier, 2024) Han, Lirong; Paoletti, Mercedes Eugenia; Moreno Álvarez, Sergio; Haut, Juan M.; Plaza, Antonio; https://orcid.org/0000-0002-8613-7037; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-9613-1659
    The diversity of remote sensing (RS) image modalities has expanded alongside advancements in RS technologies. A plethora of optical, multispectral, and hyperspectral RS images offer rich geographic class information. The ability to swiftly access multiple RS image modalities is crucial for fully harnessing the potential of RS imagery. In this work, an innovative method, called Deep Shared Proxy Construction Hashing (DSPCH), is introduced for cross-modal hyperspectral scene target retrieval using accessible RS images such as optical and sketch images. Initially, a shared proxy hash code is generated in the hash space for each land use class. Subsequently, an end-to-end deep hash network is built to generate hash codes for hyperspectral pixels and accessible RS images. Furthermore, a proxy hash loss function is designed to optimize the proposed deep hashing network, aiming to generate hash codes that closely resemble the corresponding proxy hash code. Finally, two benchmark datasets are established for cross-modal hyperspectral and accessible RS image retrieval, allowing us to conduct extensive experiments with these datasets. Our experimental results validate that the novel DSPCH method can efficiently and effectively achieve RS image cross-modal target retrieval, opening up new avenues in the field of cross-modal RS image retrieval.
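
The abstract describes assigning each land-use class a shared proxy hash code and training the network so that generated codes approach their class proxy. The paper's exact loss is not given here; below is a minimal, hypothetical PyTorch sketch of a proxy-style hashing objective (the cosine-similarity formulation, temperature, and fixed binary proxies are assumptions, not the authors' implementation).

```python
import torch
import torch.nn.functional as F

def proxy_hash_loss(codes, labels, proxies):
    """Pull each relaxed hash code toward the proxy code of its class.

    codes:   (B, K) tanh-relaxed network outputs in [-1, 1]
    labels:  (B,)   integer class indices
    proxies: (C, K) one shared proxy code per land-use class
    """
    codes = F.normalize(codes, dim=1)
    proxies = F.normalize(proxies, dim=1)
    sim = codes @ proxies.t()                  # (B, C) cosine similarities
    return F.cross_entropy(sim / 0.1, labels)  # temperature-scaled softmax over proxies

# Toy usage with random data (K = 64-bit codes, C = 10 classes).
B, K, C = 32, 64, 10
codes = torch.tanh(torch.randn(B, K, requires_grad=True))
proxies = torch.sign(torch.randn(C, K))        # fixed binary proxy per class
labels = torch.randint(0, C, (B,))
loss = proxy_hash_loss(codes, labels, proxies)
loss.backward()
```
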
  • Publication
    Hashing for Retrieving Long-Tailed Distributed Remote Sensing Images
    (IEEE, 2024) Han, Lirong; Paoletti, Mercedes Eugenia; Moreno Álvarez, Sergio; Haut, Juan M.; Pastor Vargas, Rafael; Plaza, Antonio; https://orcid.org/0000-0002-8613-7037; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-4089-9538; https://orcid.org/0000-0002-9613-1659
    The widespread availability of remotely sensed datasets establishes a cornerstone for comprehensive image retrieval within the realm of remote sensing (RS). In response, the investigation into hashing-driven retrieval methods garners significance, enabling proficient image acquisition within such extensive data magnitudes. Nevertheless, the datasets used in practical applications are invariably less ideal and exhibit long-tailed distributions. The primary hurdle pertains to the substantial discrepancy in class volumes. Moreover, commonly utilized RS datasets for hashing tasks encompass approximately two–three dozen classes. However, real-world datasets exhibit a randomized number of classes, introducing a challenging variability. This article proposes a new centripetal intensive attention hashing (CIAH) mechanism based on intensive attention features for long-tailed distribution RS image retrieval. Specifically, an intensive attention module (IAM) is adopted to enhance the significant features to facilitate the subsequent generation of representative hash codes. Furthermore, to deal with the inherent imbalance of long-tailed distributed datasets, the utilization of a centripetal loss function is introduced. This endeavor constitutes the inaugural effort toward long-tailed distributed RS image retrieval. In pursuit of this objective, a collection of long-tail datasets is meticulously curated using four widely recognized RS datasets, subsequently disseminated as benchmark datasets. The selected fundamental datasets contain 7, 25, 38, and 45 land-use classes to mimic different real RS datasets. Conducted experiments demonstrate that the proposed methodology attains a performance benchmark that surpasses currently existing methodologies.
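
The centripetal loss is described only at a high level; the sketch below shows one plausible reading, pulling embeddings toward learnable per-class centers so that minority (tail) classes still form compact clusters. The class count, feature dimension, and center parameterization are assumptions for illustration, not taken from the paper.

```python
import torch

class CentripetalLoss(torch.nn.Module):
    """Pull each embedding toward a learnable center of its class."""
    def __init__(self, num_classes, dim):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, embeddings, labels):
        # Squared distance between each sample and the center of its class.
        return ((embeddings - self.centers[labels]) ** 2).sum(dim=1).mean()

# Toy usage: 5 land-use classes, 32-dimensional attention features.
loss_fn = CentripetalLoss(num_classes=5, dim=32)
feats = torch.randn(8, 32, requires_grad=True)
labels = torch.randint(0, 5, (8,))
loss = loss_fn(feats, labels)
loss.backward()
```
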
  • Publication
    Cloud-Based Analysis of Large-Scale Hyperspectral Imagery for Oil Spill Detection
    (IEEE, 2024) Haut, Juan M.; Moreno Álvarez, Sergio; Pastor Vargas, Rafael; Pérez García, Ámbar; Paoletti, Mercedes Eugenia; https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-4089-9538; https://orcid.org/0000-0002-2943-6348; https://orcid.org/0000-0003-1030-3729
    Spectral indices are of fundamental importance in providing insights into the distinctive characteristics of oil spills, making them indispensable tools for effective action planning. The normalized difference oil index (NDOI) is a reliable metric suitable for the detection of coastal oil spills, effectively leveraging the visible and near-infrared (VNIR) spectral bands offered by commercial sensors. The present study explores the calculation of NDOI with a primary focus on leveraging remotely sensed imagery with rich spectral data. This undertaking necessitates a robust infrastructure to handle and process large datasets, thereby demanding significant memory resources and ensuring scalability. To overcome these challenges, a novel cloud-based approach is proposed in this study to conduct the distributed implementation of the NDOI calculation. This approach offers an accessible and intuitive solution, empowering developers to harness the benefits of cloud platforms. The evaluation of the proposal is conducted by assessing its performance using the scene acquired by the airborne visible infrared imaging spectrometer (AVIRIS) sensor during the 2010 oil rig disaster in the Gulf of Mexico. The catastrophic nature of the event and the subsequent challenges underscore the importance of remote sensing (RS) in facilitating decision-making processes. In this context, cloud-based approaches have emerged as a prominent technological advancement in the RS field. The experimental results demonstrate noteworthy performance by the proposed cloud-based approach and pave the way for future research on fast decision-making applications in scalable environments.
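
The abstract centers on computing a normalized-difference index over large hyperspectral scenes. As a rough illustration only, a normalized-difference index over two bands can be computed tile by tile so that a large scene never sits fully in memory; the band indices and tiling scheme below are placeholder assumptions, not the study's cloud implementation.

```python
import numpy as np

def normalized_difference(band_a, band_b, eps=1e-12):
    """Generic normalized-difference index: (A - B) / (A + B)."""
    band_a = band_a.astype(np.float64)
    band_b = band_b.astype(np.float64)
    return (band_a - band_b) / (band_a + band_b + eps)

def ndoi_tiled(cube, idx_a, idx_b, tile=128):
    """Compute the index over row tiles of a (rows, cols, bands) hyperspectral cube.

    idx_a, idx_b: band indices chosen for the oil index (placeholder values here).
    """
    rows, cols, _ = cube.shape
    out = np.empty((rows, cols), dtype=np.float64)
    for r in range(0, rows, tile):
        block = cube[r:r + tile]
        out[r:r + tile] = normalized_difference(block[..., idx_a], block[..., idx_b])
    return out

# Toy scene: 256 x 256 pixels, 224 bands (AVIRIS-like), random values.
scene = np.random.rand(256, 256, 224).astype(np.float32)
index_map = ndoi_tiled(scene, idx_a=50, idx_b=30)
```
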
  • Publication
    Enhancing Distributed Neural Network Training Through Node-Based Communications
    (IEEE, 2023) Moreno Álvarez, Sergio; Paoletti, Mercedes Eugenia; Cavallaro, Gabriele; Haut, Juan M.; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0002-3239-9904; https://orcid.org/0000-0001-6701-961X
    The amount of data needed to effectively train modern deep neural architectures has grown significantly, leading to increased computational requirements. These intensive computations are tackled by combining last-generation computing resources, such as accelerators, with classic processing units. Nevertheless, gradient communication remains the major bottleneck, hindering efficiency notwithstanding the runtime improvements obtained through data parallelism strategies. Data parallelism involves all processes in a global exchange of potentially large amounts of data, which may impede the achievement of the desired speedup and the elimination of noticeable delays or bottlenecks. As a result, communication latency issues pose a significant challenge that profoundly impacts performance on distributed platforms. This research presents node-based optimization steps to significantly reduce the gradient exchange between model replicas whilst ensuring model convergence. The proposal serves as a versatile communication scheme, suitable for integration into a wide range of general-purpose deep neural network (DNN) algorithms. The optimization takes into consideration the specific location of each replica within the platform. To demonstrate the effectiveness, different neural network approaches and datasets with disjoint properties are used. In addition, multiple types of applications are considered to demonstrate the robustness and versatility of our proposal. The experimental results show a global training time reduction whilst slightly improving accuracy. Code: https://github.com/mhaut/eDNNcomm.
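
The key idea is reducing inter-node gradient traffic by exploiting replica placement. A common realization of this general pattern is hierarchical averaging (reduce gradients inside each node first, then exchange only one aggregate per node); the NumPy sketch below illustrates that pattern under the assumption of equal-sized node groups and is not the authors' eDNNcomm code.

```python
import numpy as np

def hierarchical_average(grads_per_node):
    """Two-level gradient averaging.

    grads_per_node: list of nodes, each a list of per-GPU gradient vectors.
    Step 1: average inside each node (cheap, local interconnect).
    Step 2: average the per-node results, so only one aggregate per node
            crosses the network. With equal GPU counts per node this equals
            the flat global mean; otherwise per-node weights would be needed.
    """
    node_means = [np.mean(np.stack(gpu_grads), axis=0) for gpu_grads in grads_per_node]
    return np.mean(np.stack(node_means), axis=0)

# Toy setup: 3 nodes with 4 GPU replicas each, 10-dimensional gradients.
rng = np.random.default_rng(0)
grads = [[rng.normal(size=10) for _ in range(4)] for _ in range(3)]
global_grad = hierarchical_average(grads)
```
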
  • Publication
    Federated learning meets remote sensing
    (Elsevier, 2024-12-01) Moreno Álvarez, Sergio; Paoletti, Mercedes Eugenia; Sanchez Fernandez, Andres J.; Rico Gallego, Juan Antonio; Han, Lirong; Haut, Juan M.; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0001-6743-3570; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-8613-7037; https://orcid.org/0000-0001-6701-961X
    Remote sensing (RS) imagery provides invaluable insights into characterizing the Earth’s land surface within the scope of Earth observation (EO). Technological advances in capture instrumentation, coupled with the rise in the number of EO missions aimed at data acquisition, have significantly increased the volume of accessible RS data. This abundance of information has alleviated the challenge of insufficient training samples, a common issue in the application of machine learning (ML) techniques. In this context, crowd-sourced data play a crucial role in gathering diverse information from multiple sources, resulting in heterogeneous datasets that enable applications to harness a more comprehensive spatial coverage of the surface. However, the sensitive nature of RS data requires ensuring the privacy of the complete collection. Consequently, federated learning (FL) emerges as a privacy-preserving solution, allowing collaborators to combine such information from decentralized private data collections to build efficient global models. This paper explores the convergence between the FL and RS domains, specifically in developing data classifiers. To this aim, an extensive set of experiments is conducted to analyze the properties and performance of novel FL methodologies. The main emphasis is on evaluating the influence of such heterogeneous and disjoint data among collaborating clients. Moreover, scalability is evaluated for a growing number of clients, and resilience is assessed against Byzantine attacks. Finally, the work concludes with future directions and serves as the opening of a new research avenue for developing efficient RS applications under the FL paradigm. The source code is publicly available at https://github.com/hpc-unex/FLmeetsRS.
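
For context, the basic FL aggregation step such work builds on can be sketched as federated averaging (FedAvg): clients train locally on private data and only model parameters are combined, weighted by local sample counts. This is a generic illustration, not the code released at the repository URL above.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation).

    client_weights: list of parameter vectors, one per client (same shape).
    client_sizes:   number of local training samples per client.
    """
    sizes = np.asarray(client_sizes, dtype=np.float64)
    stacked = np.stack(client_weights)                      # (num_clients, num_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Toy round: 4 clients with heterogeneous dataset sizes.
rng = np.random.default_rng(1)
weights = [rng.normal(size=100) for _ in range(4)]
global_model = fedavg(weights, client_sizes=[500, 120, 2000, 75])
```
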
  • Publication
    Hyperspectral Image Analysis Using Cloud-Based Support Vector Machines
    (Springer, 2024) Haut, Juan M.; Franco Valiente, José M.; Paoletti, Mercedes Eugenia; Moreno Álvarez, Sergio; Pardo-Diaz, Alfonso; https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-3880-6697; https://orcid.org/0000-0003-1030-3729
    Hyperspectral image processing techniques involve time-consuming calculations due to the large volume and complexity of the data. Indeed, hyperspectral scenes contain a wealth of spatial and spectral information thanks to the hundreds of narrow and continuous bands collected across the electromagnetic spectrum. Predictive models, particularly supervised machine learning classifiers, take advantage of this information to predict the pixel categories of images through a training set of real observations. Most notably, the Support Vector Machine (SVM) has demonstrated impressive accuracy results for image classification. Notwithstanding the performance offered by SVMs, dealing with such a large volume of data is computationally challenging. In this paper, a scalable and high-performance cloud-based approach for distributed training of SVM is proposed. The proposal addresses the overwhelming amount of remote sensing (RS) data through a parallel training allocation. The implementation is performed over a memory-efficient Apache Spark distributed environment. Experiments are performed on a benchmark of real hyperspectral scenes to show the robustness of the proposal. Obtained results demonstrate efficient classification whilst optimising data processing in terms of training times.
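
As a rough companion to the abstract, a linear SVM can be trained in a distributed fashion on Spark with the built-in `LinearSVC` estimator; the toy data, column names, and binary-label setup below are placeholders and not the paper's memory-efficient implementation.

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LinearSVC
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("hsi-svm-sketch").getOrCreate()

# Toy two-class dataset standing in for labeled hyperspectral pixels
# (each "features" vector would hold the pixel's spectral bands).
rows = [
    (0.0, Vectors.dense([0.1, 0.3, 0.2])),
    (1.0, Vectors.dense([0.9, 0.7, 0.8])),
    (0.0, Vectors.dense([0.2, 0.1, 0.3])),
    (1.0, Vectors.dense([0.8, 0.9, 0.7])),
]
df = spark.createDataFrame(rows, ["label", "features"])

# LinearSVC fits a binary linear SVM with the work spread across executors.
svm = LinearSVC(maxIter=50, regParam=0.01, labelCol="label", featuresCol="features")
model = svm.fit(df)
model.transform(df).select("label", "prediction").show()

spark.stop()
```
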
  • Publication
    Optimizing Distributed Deep Learning in Heterogeneous Computing Platforms for Remote Sensing Data Classification
    (IEEE, 2022) Moreno Álvarez, Sergio; Paoletti, Mercedes Eugenia; Rico Gallego, Juan Antonio; Cavallaro, Gabriele; Haut, Juan M.; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-3239-9904; https://orcid.org/0000-0001-6701-961X
    Applications from Remote Sensing (RS) have unveiled unique challenges to Deep Learning (DL) due to the high volume and complexity of their data. On the one hand, deep neural network architectures have the capability to automatically extract informative features from RS data. On the other hand, these models have massive amounts of tunable parameters, requiring high computational capabilities. Distributed DL with data parallelism on High-Performance Computing (HPC) systems has proved necessary in dealing with the demands of DL models. Nevertheless, a single HPC system can already be highly heterogeneous and include different computing resources with uneven processing power. In this context, a standard data parallelism strategy does not partition the data efficiently according to the available computing resources. This paper proposes an alternative approach to compute the gradient, which guarantees that the contribution to the gradient calculation is proportional to the processing speed of each DL model's replica. The experimental results are obtained in a heterogeneous HPC system with RS data and demonstrate that the proposed approach provides a significant training speedup and gain in global accuracy compared to one of the state-of-the-art distributed DL frameworks.
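
The core idea stated in the abstract is weighting each replica's gradient contribution by its processing speed (equivalently, by the batch share it processes). A minimal illustration of such a weighting follows, under the assumption of speed-proportional batch shares; it is not the paper's implementation.

```python
import numpy as np

def speed_weighted_gradient(replica_grads, replica_speeds):
    """Combine replica gradients proportionally to each replica's throughput.

    replica_grads:  list of gradient vectors, one per model replica.
    replica_speeds: relative processing speed of each device (e.g., images/s),
                    which here also determines its share of the global batch.
    """
    speeds = np.asarray(replica_speeds, dtype=np.float64)
    weights = speeds / speeds.sum()
    return sum(w * g for w, g in zip(weights, replica_grads))

# Toy platform: one fast GPU, one slower GPU, and a CPU replica.
rng = np.random.default_rng(2)
grads = [rng.normal(size=20) for _ in range(3)]
combined = speed_weighted_gradient(grads, replica_speeds=[300.0, 180.0, 40.0])
```
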
  • Publication
    Multiple Attention-Guided Capsule Networks for Hyperspectral Image Classification
    (IEEE, 2022) Paoletti, Mercedes Eugenia; Moreno Álvarez, Sergio; Haut, Juan M.; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0001-6701-961X
    The profound impact of deep learning and particularly of convolutional neural networks (CNNs) in automatic image processing has been decisive for the progress and evolution of remote sensing (RS) hyperspectral imaging (HSI) processing. Indeed, CNNs have established themselves as the current state of the art, reaching unparalleled results in HSI classification. However, most CNNs were designed for RGB images, and their direct application to HSI data analysis could lead to nonoptimal solutions. Moreover, CNNs perform classification based on the identification of specific features, neglecting the spatial relationships between different features (i.e., their arrangement) due to pooling techniques. The capsule network (CapsNet) architecture is an attempt to overcome this drawback by nesting several neural layers within a capsule, connected by dynamic routing, to identify not only the presence of a feature but also its instantiation parameters, and to learn the relationships between different features. Although this mechanism improves the data representations, enhancing the classification of HSI data, it still acts as a black box, without control of the most relevant features for classification purposes. Indeed, important features could be disregarded. In this article, a new multiple attention-guided CapsNet is proposed to improve feature processing for RS-HSI classification, both to improve computational efficiency (in terms of parameters) and to increase accuracy. Hence, the most representative visual parts of the images are identified using a detailed feature extractor coupled with attention mechanisms. Extensive experimental results have been obtained on five real datasets, demonstrating the great potential of the proposed method compared to other state-of-the-art classifiers.
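
Since the abstract hinges on dynamic routing between capsules, a compact NumPy sketch of the standard routing-by-agreement step is included below for orientation; it follows the original CapsNet formulation and does not reproduce the attention-guided extensions proposed in the article.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """CapsNet squashing nonlinearity: keeps direction, bounds length in [0, 1)."""
    norm2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, iterations=3):
    """Routing-by-agreement between two capsule layers.

    u_hat: (num_in, num_out, dim) prediction vectors from lower to upper capsules.
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                              # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)     # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                   # weighted sum per output capsule
        v = squash(s)                                            # (num_out, dim) output capsules
        b += (u_hat * v[None]).sum(axis=-1)                      # agreement update
    return v

# Toy routing: 8 lower capsules -> 4 upper capsules of dimension 16.
u_hat = np.random.default_rng(3).normal(size=(8, 4, 16))
upper_caps = dynamic_routing(u_hat)
```
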
  • Publication
    Remote Sensing Image Classification Using CNNs With Balanced Gradient for Distributed Heterogeneous Computing
    (IEEE, 2022) Moreno Álvarez, Sergio; Paoletti, Mercedes Eugenia; Cavallaro, Gabriele; Rico Gallego, Juan Antonio; Haut, Juan M.; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0002-3239-9904; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0001-6701-961X
    Land-cover classification methods are based on the processing of large image volumes to accurately extract representative features. Particularly, convolutional models provide notable characterization properties for image classification tasks. Distributed learning mechanisms on high-performance computing platforms have been proposed to speed up the processing, while achieving an efficient feature extraction. High-performance computing platforms are commonly composed of a combination of central processing units (CPUs) and graphics processing units (GPUs) with different computational capabilities. As a result, current homogeneous workload distribution techniques for deep learning (DL) become obsolete due to their inefficient use of computational resources. To address this, new computational balancing proposals, such as heterogeneous data parallelism, have been implemented. Nevertheless, these techniques should be improved to handle the peculiarities of working with heterogeneous data workloads in the training of distributed DL models. The objective of handling heterogeneous workloads for current platforms motivates the development of this work. This letter proposes an innovative heterogeneous gradient calculation applied to land-cover classification tasks through convolutional models, considering the data amount assigned to each device in the platform while maintaining the acceleration. Extensive experimentation has been conducted on multiple datasets, considering different deep models on heterogeneous platforms to demonstrate the performance of the proposed methodology.
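
The letter's premise is that each device should receive a data share matched to its compute capability before the gradient is balanced. A toy partitioning helper along those lines is sketched below; the throughput values and the remainder rule are illustrative assumptions, not the proposed method.

```python
import numpy as np

def partition_by_throughput(num_samples, throughputs):
    """Split a dataset of num_samples items proportionally to device throughput.

    Returns the number of samples assigned to each device; the remainder goes to
    the fastest device so the shares always sum to num_samples.
    """
    t = np.asarray(throughputs, dtype=np.float64)
    shares = np.floor(num_samples * t / t.sum()).astype(int)
    shares[np.argmax(t)] += num_samples - shares.sum()
    return shares

# Toy heterogeneous platform: two GPUs and two CPUs with uneven speeds.
print(partition_by_throughput(10_000, throughputs=[410.0, 260.0, 55.0, 40.0]))
```
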
  • Publication
    AAtt-CNN: Automatic Attention-Based Convolutional Neural Networks for Hyperspectral Image Classification
    (IEEE, 2023) Paoletti, Mercedes Eugenia; Moreno Álvarez, Sergio; Xue, Yu; Haut, Juan M.; Plaza, Antonio; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0002-9069-7547; https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-9613-1659
    Convolutional models have provided outstanding performance in the analysis of hyperspectral images (HSIs). These architectures are carefully designed to extract intricate information from nonlinear features for classification tasks. Notwithstanding their results, model architectures are manually engineered and further optimized for generalized feature extraction. In general terms, deep architectures are time-consuming for complex scenarios, since they require fine-tuning. Neural architecture search (NAS) has emerged as a suitable approach to tackle this shortcoming. In parallel, modern attention-based methods have boosted the recognition of sophisticated features. The search for optimal neural architectures combined with attention procedures motivates the development of this work. This article develops a new method to automatically design and optimize convolutional neural networks (CNNs) for HSI classification using channel-based attention mechanisms. Specifically, 1-D and spectral–spatial (3-D) classifiers are considered to handle the large amount of information contained in HSIs from different perspectives. Furthermore, the proposed automatic attention-based CNN (AAtt-CNN) method meets the requirement to lower the large computational overheads associated with architectural search. It is compared with current state-of-the-art (SOTA) classifiers. Our experiments, conducted using a wide range of HSI images, demonstrate that AAtt-CNN succeeds in finding optimal architectures for classification, leading to SOTA results.
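
Channel-based attention of the kind the abstract refers to is often built from a squeeze-and-excitation style block; a generic PyTorch version is sketched below for reference. The reduction ratio and its placement inside the searched architectures are assumptions, not the AAtt-CNN design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention for spectral feature maps."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # re-weight channels

# Toy feature map: batch of 2, 64 channels, 9x9 spatial patch.
att = ChannelAttention(channels=64)
out = att(torch.randn(2, 64, 9, 9))
```
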