Person:
Moreno Álvarez, Sergio

ORCID
0000-0002-1858-9920
Surname
Moreno Álvarez
First name
Sergio

Search results

Showing 1 - 10 of 12
  • Publication
    Analytical Communication Performance Models as a metric in the partitioning of data-parallel kernels on heterogeneous platforms
    (Springer, 2019) Rico Gallego, Juan Antonio; Díaz Martín, Juan Carlos; Calvo Jurado, Carmen; Moreno Álvarez, Sergio; García Zapata, Juan Luis; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-8435-3844; https://orcid.org/0000-0001-9842-081X; https://orcid.org/0000-0003-1419-1672
    Data partitioning on heterogeneous HPC platforms is formulated as an optimization problem. The algorithm starts from performance models of the processes, which represent their speeds, and outputs a data tiling that minimizes the communication cost. Traditionally, communication volume is the metric used to guide the partitioning, but such a metric is unable to capture the complexities introduced by uneven communication channels and the variety of patterns in the kernel communications. We discuss Analytical Communication Performance Models as a new metric in partitioning algorithms. They have not been considered in the past for two reasons: prediction inaccuracy and the lack of tools to automatically build and solve formal expressions of kernel communications. We show how communication performance models fit the specific kernel and platform, and we present results that equal or even improve on previous volume-based strategies.
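    The abstract above frames partitioning as minimizing a modeled cost rather than raw communication volume. As a minimal sketch of that idea (a hypothetical one-dimensional tiling between two processes, with made-up speeds and bandwidths, not the paper's algorithm):

    ```python
    N = 1024            # columns of the data space to partition
    WIDTH = 4096.0      # bytes exchanged per halo column
    SPEED = (3.0, 1.0)  # relative compute speeds of the two processes
    BW = (1.0e9, 2.5e8) # per-process link bandwidths in bytes/s (uneven)

    def modeled_cost(split):
        """Predicted iteration time: slowest compute plus slowest halo transfer."""
        cols = (split, N - split)
        compute = max(cols[i] * 1e-6 / SPEED[i] for i in range(2))  # 1e-6 s per column
        comm = max(WIDTH / BW[i] for i in range(2))
        return compute + comm

    best = min(range(1, N), key=modeled_cost)
    print(f"model-driven split: {best} / {N - best} columns")  # 768 / 256
    ```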
  • Publication
    Distributed Deep Learning for Remote Sensing Data Interpretation
    (IEEE, 2021-03-15) Haut, Juan Mario; Paoletti, Mercedes Eugenia; Moreno Álvarez, Sergio; Plaza, Javier; Rico Gallego, Juan Antonio; Plaza, Antonio; https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0002-2384-9141; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-9613-1659
    As a newly emerging technology, deep learning (DL) is a very promising field in big data applications. Remote sensing often involves huge data volumes obtained daily by numerous in-orbit satellites. This makes it a perfect target area for data-driven applications. Nowadays, technological advances in terms of software and hardware have a noticeable impact on Earth observation applications, more specifically in remote sensing techniques and procedures, allowing for the acquisition of data sets with greater quality at higher acquisition rates. This results in the collection of huge amounts of remotely sensed data, characterized by their large spatial resolution (in terms of the number of pixels per scene), and very high spectral dimensionality, with hundreds or even thousands of spectral bands. As a result, remote sensing instruments on spaceborne and airborne platforms are now generating data cubes with extremely high dimensionality, imposing several restrictions in terms of both processing runtimes and storage capacity. In this article, we provide a comprehensive review of the state of the art in DL for remote sensing data interpretation, analyzing the strengths and weaknesses of the most widely used techniques in the literature, as well as an exhaustive description of their parallel and distributed implementations (with a particular focus on those conducted using cloud computing systems). We also provide quantitative results, offering an assessment of a DL technique in a specific case study (source code available: https://github.com/mhaut/cloud-dnn-HSI). This article concludes with some remarks and hints about future challenges in the application of DL techniques to distributed remote sensing data interpretation problems. We emphasize the role of the cloud in providing a powerful architecture that is now able to manage vast amounts of remotely sensed data due to its implementation simplicity, low cost, and high efficiency compared to other parallel and distributed architectures, such as grid computing or dedicated clusters.
  • Publication
    Evaluación de Rendimiento del Entrenamiento Distribuido de Redes Neuronales Profundas en Plataformas Heterogéneas
    (Universidad de Extremadura, 2019) Moreno Álvarez, Sergio; Paoletti, Mercedes Eugenia; Haut, Juan Mario; Rico Gallego, Juan Antonio; Plaza, Javier; Díaz Martín, Juan Carlos; Vega Rodriguez, Miguel ángel; Plaza Miguel, Antonio J.; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0001-6701-961X; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-8908-1606; https://orcid.org/0000-0002-8435-3844
    Asynchronous stochastic gradient descent is an optimization technique commonly used in the distributed training of deep neural networks. In data-partitioning distributions, a replica of the model is trained on each processing unit of the platform, using sets of samples called mini-batches. This is an iterative process in which, at the end of each mini-batch, the replicas combine the computed gradients to update their local copy of the parameters. However, under asynchrony, differences in the per-iteration training time between replicas cause staleness: replicas progress at different speeds, and each replica trains with an outdated version of the parameters. A high degree of staleness has a negative impact on the accuracy of the resulting model. Moreover, high-performance computing platforms are usually heterogeneous, composed of CPUs and GPUs with different capabilities, which aggravates the staleness problem. In this work, we propose applying computational load-balancing techniques, well known in the field of High-Performance Computing, to the distributed training of deep models. Each replica is assigned a number of mini-batches in proportion to its relative speed. The experimental results obtained on a heterogeneous platform show that, while accuracy remains constant, training performance increases considerably; or, viewed another way, a higher accuracy of the model estimates is reached in the same training time. We discuss the causes of this performance increase and propose next steps for future research.
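    A toy illustration of the allocation rule described above, assuming mini-batches are assigned in proportion to benchmarked relative speeds (device names and numbers are hypothetical):

    ```python
    speeds = {"gpu0": 4.0, "gpu1": 2.0, "cpu0": 1.0}  # hypothetical benchmark results
    total_batches = 700

    total_speed = sum(speeds.values())
    allocation = {r: round(total_batches * s / total_speed) for r, s in speeds.items()}
    print(allocation)  # {'gpu0': 400, 'gpu1': 200, 'cpu0': 100}
    ```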
  • Publication
    Performance evaluation of model-driven partitioning algorithms for data-parallel kernels on heterogeneous platforms
    (Wiley, 2019) Rico Gallego, Juan Antonio; Díaz Martín, Juan Carlos; Moreno Álvarez, Sergio; Calvo Jurado, Carmen; García Zapata, Juan Luis; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-8435-3844; https://orcid.org/0000-0001-9842-081X; https://orcid.org/0000-0003-1419-1672
    Data-parallel applications running on heterogeneous high-performance computing platforms require a nonuniform distribution of the workload between the available processes. Data partitioning algorithms are formulated as an optimization problem: starting from the computational performance models of the processes, the goal is to find the partition that minimizes the communication cost. Traditionally, communication volume is the metric used to guide the partitioning. This metric, however, is unable to capture the complexity of current heterogeneous systems, which show uneven communication channels and execute applications with different communication patterns. In this paper, we discuss the role of analytical communication performance models as a metric in partitioning algorithms. First, we describe a method to programmatically predict the communication cost of a data-parallel kernel based on the τ-Lop analytical model. We show that this figure better captures the communication features of applications and platforms. We present results showing that this approach builds partitions that equal or improve the performance of data-parallel applications on heterogeneous platforms with respect to previous volume-based strategies.
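    A tiny numeric example of why volume alone can mislead on uneven channels (made-up numbers, unrelated to the paper's experiments): two partitions that move the same number of bytes can differ by an order of magnitude in transfer time.

    ```python
    # Both plans exchange the same volume, so a volume metric ranks them equally,
    # but a bandwidth-aware model separates them clearly.
    VOLUME = 8.0e6                   # bytes exchanged per iteration under both plans
    bw_fast, bw_slow = 1.0e9, 1.0e8  # bytes/s of two uneven channels

    plan_a = VOLUME / bw_fast        # traffic routed over the fast link
    plan_b = VOLUME / bw_slow        # same volume over the slow link
    print(f"plan A: {plan_a * 1e3:.0f} ms, plan B: {plan_b * 1e3:.0f} ms")  # 8 ms vs 80 ms
    ```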
  • Publication
    Training deep neural networks: a static load balancing approach
    (Springer, 2020-03-02) Moreno Álvarez, Sergio; Haut, Juan Mario; Paoletti, Mercedes Eugenia; Rico Gallego, Juan Antonio; Díaz Martín, Juan Carlos; Plaza, Javier; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-8435-3844; https://orcid.org/0000-0002-8908-1606
    Deep neural networks are currently trained under data-parallel setups on high-performance computing (HPC) platforms, where a replica of the full model is assigned to each computational resource and trained on non-overlapping subsets known as batches. Replicas combine the computed gradients to update their local copies at the end of each batch. However, differences in the performance of the resources assigned to replicas in current heterogeneous platforms induce waiting times when gradients are combined synchronously, leading to an overall performance degradation. Although asynchronous communication of gradients has been proposed as an alternative, it suffers from the so-called staleness problem: the training in each replica is computed using a stale version of the parameters, which negatively impacts the accuracy of the resulting model. In this work, we study the application of well-known HPC static load balancing techniques to the distributed training of deep models. Our approach assigns a different batch size to each replica, proportional to its relative computing capacity, hence minimizing the staleness problem. Our experimental results (obtained in the context of a remotely sensed hyperspectral image processing application) show that, while the classification accuracy is kept constant, the training time substantially decreases with respect to unbalanced training. This is illustrated using heterogeneous computing platforms made up of CPUs and GPUs with different performance.
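    A sketch of the static balancing step, assuming replica speeds come from a short warm-up benchmark; `benchmark` and `balanced_batch_sizes` are hypothetical helper names, not the paper's code:

    ```python
    import time

    def benchmark(step_fn, warmup=3, iters=10):
        """Time a fixed-size training step on one resource; return steps per second."""
        for _ in range(warmup):
            step_fn()
        start = time.perf_counter()
        for _ in range(iters):
            step_fn()
        return iters / (time.perf_counter() - start)

    def balanced_batch_sizes(rates, global_batch):
        """Split a global batch proportionally to each replica's measured rate."""
        total = sum(rates.values())
        sizes = {r: int(global_batch * v / total) for r, v in rates.items()}
        # Hand any rounding remainder to the fastest replica.
        sizes[max(rates, key=rates.get)] += global_batch - sum(sizes.values())
        return sizes

    print(balanced_batch_sizes({"gpu": 3.2, "cpu": 0.8}, 256))  # {'gpu': 205, 'cpu': 51}
    ```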
  • Publication
    A tool to assess the communication cost of parallel kernels on heterogeneous platforms
    (Springer, 2020) Rico Gallego, Juan Antonio; Moreno Álvarez, Sergio; Díaz Martín, Juan Carlos; Lastovetsky, Alexey L.; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-8435-3844; https://orcid.org/0000-0001-9460-3897
    Ensuring that applications achieve efficient resource usage and fast execution times on today's complex heterogeneous high-performance computing platforms is a paramount problem. Essential steps toward this goal are the optimal partitioning of the data space between the processes composing a typical task/data-parallel application, and their correct mapping and deployment on the platform. Computational and communication performance modeling, describing the behavior of the platform and the application, is an increasingly recognized approach. This paper discusses the utility of the τ-Lop analytical communication performance model in facing these issues and contributes a practical symbolic computation tool that represents, manipulates and accurately evaluates the formal communication cost expression derived from a hybrid kernel. We identify a set of scenarios where the tool can be applied, provide both basic and advanced usage examples, and evaluate the tool on real-life kernels.
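    The tool itself is not shown here; as a generic illustration of symbolic cost manipulation (SymPy, with stand-in symbols rather than actual τ-Lop terms), representing, simplifying and numerically evaluating a communication cost expression can look like this:

    ```python
    import sympy as sp

    # Stand-in symbols: m = message size, P = processes, o = per-message
    # overhead, B = channel bandwidth. The cost form is a toy example.
    m, P, o, B = sp.symbols("m P o B", positive=True)

    ring_allgather = (P - 1) * (o + m / (P * B))  # toy cost of a ring allgather
    cost = sp.simplify(ring_allgather)

    # Evaluate the symbolic expression for one concrete platform and message size.
    print(cost.subs({m: 8 * 2**20, P: 8, o: 5e-6, B: 1.0e9}))
    ```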
  • Publication
    Self-Supervised Learning on Small In-Domain Datasets Can Overcome Supervised Learning in Remote Sensing
    (IEEE, 2024) Sanchez-Fernandez, Andres J.; Moreno Álvarez, Sergio; Rico Gallego, Juan Antonio; Tabik, Siham; https://orcid.org/0000-0001-6743-3570; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0003-4093-5356
    The availability of high-resolution satellite images has accelerated the creation of new datasets designed to tackle broader remote sensing (RS) problems. Although popular tasks, such as scene classification, have received significant attention, the recent release of the Land-1.0 RS dataset marks the initiation of endeavors to estimate land-use and land-cover (LULC) fraction values per RGB satellite image. This challenging problem involves estimating LULC composition, i.e., the proportion of different LULC classes from satellite imagery, with major applications in environmental monitoring, agricultural/urban planning, and climate change studies. Currently, supervised deep learning models—the state-of-the-art in image classification—require large volumes of labeled training data to provide good generalization. To face the challenges posed by the scarcity of labeled RS data, self-supervised learning (SSL) models have recently emerged, learning directly from unlabeled data by leveraging the underlying structure. This is the first article to investigate the performance of SSL in LULC fraction estimation on RGB satellite patches using in-domain knowledge. We also performed a complementary analysis on LULC scene classification. Specifically, we pretrained Barlow Twins, MoCov2, SimCLR, and SimSiam SSL models with ResNet-18 using the Sentinel2GlobalLULC small RS dataset and then performed transfer learning to downstream tasks on Land-1.0. Our experiments demonstrate that SSL achieves competitive or slightly better results when trained on a smaller high-quality in-domain dataset of 194 877 samples compared to the supervised model trained on ImageNet-1k with 1 281 167 samples. This outcome highlights the effectiveness of SSL using in-distribution datasets, demonstrating efficient learning with fewer but more relevant data.
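    A hedged sketch of the transfer-learning step described above: reuse an SSL-pretrained ResNet-18 backbone and attach a head that outputs LULC fractions summing to one. The checkpoint path and class count are hypothetical placeholders, not values from the paper.

    ```python
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    NUM_LULC_CLASSES = 29  # hypothetical; set to the dataset's class count

    backbone = resnet18(weights=None)
    # state = torch.load("ssl_pretrained_resnet18.pt")  # hypothetical SSL checkpoint
    # backbone.load_state_dict(state, strict=False)     # projector keys may differ

    # Replace the classification head with a fraction-estimation head.
    backbone.fc = nn.Sequential(
        nn.Linear(backbone.fc.in_features, NUM_LULC_CLASSES),
        nn.Softmax(dim=1),  # per-image fractions sum to 1
    )

    x = torch.randn(2, 3, 224, 224)  # dummy RGB patches
    print(backbone(x).sum(dim=1))    # approximately tensor([1., 1.])
    ```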
  • Publication
    Federated learning meets remote sensing
    (Elsevier, 2024-12-01) Moreno Álvarez, Sergio; Paoletti, Mercedes Eugenia; Sanchez Fernandez, Andres J.; Rico Gallego, Juan Antonio; Han, Lirong; Haut, Juan M.; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0001-6743-3570; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-8613-7037; https://orcid.org/0000-0001-6701-961X
    Remote sensing (RS) imagery provides invaluable insights into characterizing the Earth’s land surface within the scope of Earth observation (EO). Technological advances in capture instrumentation, coupled with the rise in the number of EO missions aimed at data acquisition, have significantly increased the volume of accessible RS data. This abundance of information has alleviated the challenge of insufficient training samples, a common issue in the application of machine learning (ML) techniques. In this context, crowd-sourced data play a crucial role in gathering diverse information from multiple sources, resulting in heterogeneous datasets that enable applications to harness a more comprehensive spatial coverage of the surface. However, the sensitive nature of RS data requires ensuring the privacy of the complete collection. Consequently, federated learning (FL) emerges as a privacy-preserving solution, allowing collaborators to combine such information from decentralized private data collections to build efficient global models. This paper explores the convergence between the FL and RS domains, specifically in developing data classifiers. To this aim, an extensive set of experiments is conducted to analyze the properties and performance of novel FL methodologies. The main emphasis is on evaluating the influence of such heterogeneous and disjoint data among collaborating clients. Moreover, scalability is evaluated for a growing number of clients, and resilience is assessed against Byzantine attacks. Finally, the work concludes with future directions and serves as the opening of a new research avenue for developing efficient RS applications under the FL paradigm. The source code is publicly available at https://github.com/hpc-unex/FLmeetsRS.
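    The paper evaluates several FL methodologies; as a minimal sketch of the basic aggregation idea only, a FedAvg-style round weights each client's parameters by its local dataset size:

    ```python
    import copy
    import torch

    def federated_average(client_states, client_sizes):
        """Combine client state dicts, weighting each by its local dataset size."""
        total = sum(client_sizes)
        avg = copy.deepcopy(client_states[0])
        for key in avg:
            avg[key] = sum(
                state[key] * (n / total)
                for state, n in zip(client_states, client_sizes)
            )
        return avg

    c1 = {"w": torch.ones(2, 2)}   # toy "model" from client 1 (300 samples)
    c2 = {"w": torch.zeros(2, 2)}  # toy "model" from client 2 (100 samples)
    print(federated_average([c1, c2], [300, 100])["w"])  # all entries 0.75
    ```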
  • Publication
    Optimizing Distributed Deep Learning in Heterogeneous Computing Platforms for Remote Sensing Data Classification
    (IEEE, 2022) Moreno Álvarez, Sergio; Paoletti, Mercedes Eugenia; Rico Gallego, Juan Antonio; Cavallaro, Gabriele; Haut, Juan M.; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0002-3239-9904; https://orcid.org/0000-0001-6701-961X
    Applications from Remote Sensing (RS) unveil unique challenges to Deep Learning (DL) due to the high volume and complexity of their data. On the one hand, deep neural network architectures have the capability to automatically extract informative features from RS data. On the other hand, these models have massive amounts of tunable parameters, requiring high computational capabilities. Distributed DL with data parallelism on High-Performance Computing (HPC) systems has proved necessary in dealing with the demands of DL models. Nevertheless, a single HPC system can already be highly heterogeneous and include different computing resources with uneven processing power. In this context, a standard data parallelism strategy does not partition the data efficiently according to the available computing resources. This paper proposes an alternative approach to compute the gradient, which guarantees that the contribution to the gradient calculation is proportional to the processing speed of each DL model replica. The experimental results, obtained in a heterogeneous HPC system with RS data, demonstrate that the proposed approach provides a significant training speed-up and gain in global accuracy compared to a state-of-the-art distributed DL framework.
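    A minimal sketch of the weighting idea stated above (assumed form: each replica's gradient enters the combined update in proportion to its processing speed, instead of a uniform 1/P average; the speeds are made up):

    ```python
    import numpy as np

    speeds = np.array([4.0, 2.0, 1.0])  # hypothetical replica throughputs
    weights = speeds / speeds.sum()     # contribution per replica: 4/7, 2/7, 1/7

    grads = [np.random.randn(10) for _ in speeds]  # per-replica gradients
    combined = sum(w * g for w, g in zip(weights, grads))
    ```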
  • Publication
    Remote Sensing Image Classification Using CNNs With Balanced Gradient for Distributed Heterogeneous Computing
    (IEEE, 2022) Moreno Álvarez, Sergio; Paoletti, Mercedes Eugenia; Cavallaro, Gabriele; Rico Gallego, Juan Antonio; Haut, Juan M.; https://orcid.org/0000-0003-1030-3729; https://orcid.org/0000-0002-3239-9904; https://orcid.org/0000-0002-4264-7473; https://orcid.org/0000-0001-6701-961X
    Land-cover classification methods are based on the processing of large image volumes to accurately extract representative features. Particularly, convolutional models provide notable characterization properties for image classification tasks. Distributed learning mechanisms on high-performance computing platforms have been proposed to speed up the processing, while achieving an efficient feature extraction. High-performance computing platforms are commonly composed of a combination of central processing units (CPUs) and graphics processing units (GPUs) with different computational capabilities. As a result, current homogeneous workload distribution techniques for deep learning (DL) become obsolete due to their inefficient use of computational resources. To address this, new computational balancing proposals, such as heterogeneous data parallelism, have been implemented. Nevertheless, these techniques should be improved to handle the peculiarities of working with heterogeneous data workloads in the training of distributed DL models. The objective of handling heterogeneous workloads for current platforms motivates the development of this work. This letter proposes an innovative heterogeneous gradient calculation applied to land-cover classification tasks through convolutional models, considering the data amount assigned to each device in the platform while maintaining the acceleration. Extensive experimentation has been conducted on multiple datasets, considering different deep models on heterogeneous platforms to demonstrate the performance of the proposed methodology.
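    A quick numeric check of the weighting rule suggested above: averaging per-replica mean gradients weighted by their sample counts reproduces the mean gradient over the full batch (toy per-sample gradients, hypothetical shard sizes):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.normal(size=(96, 5))    # per-sample gradients, stacked
    shards = np.split(samples, [64, 80])  # uneven shards of 64, 16 and 16 samples

    weighted = sum((len(s) / len(samples)) * s.mean(axis=0) for s in shards)
    assert np.allclose(weighted, samples.mean(axis=0))
    ```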