Authors: Moreno Álvarez, Sergio; Paoletti, Mercedes Eugenia; Rico Gallego, Juan Antonio; Cavallaro, Gabriele; Haut, Juan M.
Date accessioned: 2024-11-19
Date available: 2024-11-19
Date issued: 2022
Citation: S. Moreno-Álvarez, M. E. Paoletti, J. A. Rico, G. Cavallaro and J. M. Haut, "Optimizing Distributed Deep Learning in Heterogeneous Computing Platforms for Remote Sensing Data Classification," IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 2022, pp. 2726-2729, doi: 10.1109/IGARSS46834.2022.9883762.
ISBN: 978-1-6654-2792-0
ISSN: 2153-6996 (print) | 2153-7003 (electronic)
DOI: https://doi.org/10.1109/IGARSS46834.2022.9883762
Handle: https://hdl.handle.net/20.500.14468/24422
Publisher's note: The registered version of this article, first published by Institute of Electrical and Electronics Engineers Inc. in 2022, is available online at the publisher's website: IEEE, https://doi.org/10.1109/IGARSS46834.2022.9883762

Abstract: Remote Sensing (RS) applications pose unique challenges to Deep Learning (DL) due to the high volume and complexity of their data. On the one hand, deep neural network architectures can automatically extract informative features from RS data. On the other hand, these models have massive numbers of tunable parameters and require high computational capabilities. Distributed DL with data parallelism on High-Performance Computing (HPC) systems has proved necessary to meet the demands of DL models. Nevertheless, a single HPC system can already be highly heterogeneous and include computing resources with uneven processing power. In this context, a standard data parallelism strategy does not partition the data efficiently according to the available computing resources. This paper proposes an alternative approach to computing the gradient, which guarantees that the contribution of each DL model replica to the gradient calculation is proportional to its processing speed. The experimental results, obtained on a heterogeneous HPC system with RS data, demonstrate that the proposed approach provides a significant training speed-up and a gain in global accuracy compared to one of the state-of-the-art distributed DL frameworks.

Language: English
Access rights: info:eu-repo/semantics/restrictedAccess
Subject (UNESCO): 12 Mathematics :: 1203 Computer Science :: 1203.17 Informatics
Title: Optimizing Distributed Deep Learning in Heterogeneous Computing Platforms for Remote Sensing Data Classification
Type: Conference proceedings
Keywords: training; deep learning; computational modeling; distributed databases; predictive models; parallel processing; feature extraction
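
Illustration: the abstract describes weighting each replica's contribution to the gradient by its processing speed rather than assuming identical workers. The following Python sketch shows one way such proportional partitioning and weighted aggregation can be expressed; the throughput figures, the toy linear model, and the helper names (partition_batch, local_gradient) are illustrative assumptions for exposition and are not taken from the paper.

    import numpy as np

    # Hypothetical per-replica throughputs (samples/second) on a heterogeneous
    # node, e.g. two GPUs of different generations and a slower CPU worker.
    # These values are illustrative, not measurements from the paper.
    throughputs = np.array([900.0, 450.0, 150.0])

    def partition_batch(global_batch, throughputs):
        """Split a global batch so each replica's share is proportional
        to its measured processing speed."""
        shares = throughputs / throughputs.sum()
        sizes = np.floor(shares * global_batch).astype(int)
        sizes[0] += global_batch - sizes.sum()  # absorb rounding remainder
        return sizes

    def local_gradient(w, X, y):
        """Mean-squared-error gradient of a toy linear model on a local shard."""
        residual = X @ w - y
        return 2.0 * X.T @ residual / len(y)

    rng = np.random.default_rng(0)
    n_features, global_batch = 8, 512
    w = rng.normal(size=n_features)

    # Synthetic global batch, split proportionally to replica speed.
    X = rng.normal(size=(global_batch, n_features))
    y = rng.normal(size=global_batch)
    sizes = partition_batch(global_batch, throughputs)
    bounds = np.concatenate(([0], np.cumsum(sizes)))

    # Each replica computes a gradient on its own shard; the aggregate weights
    # every replica by its shard size, so each contribution is proportional to
    # that replica's processing speed rather than assumed equal.
    local_grads = [local_gradient(w, X[a:b], y[a:b])
                   for a, b in zip(bounds[:-1], bounds[1:])]
    global_grad = sum(s * g for s, g in zip(sizes, local_grads)) / sizes.sum()

Because shard sizes track throughput, all replicas finish their local step at roughly the same time, avoiding the stragglers that uniform partitioning produces on heterogeneous hardware.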