Publication:
Heterogeneous gradient computing optimization for scalable deep neural networks

dc.contributor.author: Moreno Álvarez, Sergio
dc.contributor.author: Paoletti, Mercedes Eugenia
dc.contributor.author: Rico Gallego, Juan Antonio
dc.contributor.author: Haut, Juan M.
dc.contributor.orcid: https://orcid.org/0000-0003-1030-3729
dc.contributor.orcid: https://orcid.org/0000-0002-4264-7473
dc.contributor.orcid: https://orcid.org/0000-0001-6701-961X
dc.date.accessioned: 2024-11-18T11:34:54Z
dc.date.available: 2024-11-18T11:34:54Z
dc.date.issued: 2022
dc.description: The registered version of this article, first published in "The Journal of Supercomputing, 78, 2022", is available online at the publisher's website: Springer, https://doi.org/10.1007/s11227-022-04399-2
dc.description.abstract: Nowadays, data processing applications based on neural networks must cope with the growth in the amount of data to be processed and with the increasing depth and complexity of neural network architectures, and hence with the number of parameters to be learned. High-performance computing platforms provide fast computing resources, including multi-core processors and graphics processing units, to manage the computational burden of deep neural network applications. A common optimization technique is to distribute the workload among the processes deployed on the resources of the platform, an approach known as data parallelism. Each process, known as a replica, trains its own copy of the model on a disjoint data partition. Nevertheless, the heterogeneity of the computational resources composing the platform requires distributing the workload unevenly among the replicas, according to their computational capabilities, to optimize the overall execution performance. Since the amount of data to be processed differs between replicas, the influence of the gradients computed by each replica on the global parameter update should also differ. This work proposes a modification of the gradient computation method that accounts for the different speeds of the replicas and, hence, for the amount of data assigned to each. Experiments conducted on heterogeneous high-performance computing platforms over a wide range of models and datasets show an improvement in final accuracy with respect to current techniques, with comparable performance.
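The core idea the abstract describes, weighting each replica's gradient contribution to the global update by its share of the training data, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function and variable names are assumptions.

```python
import numpy as np

def weighted_gradient_update(params, grads, partition_sizes, lr=0.1):
    """Aggregate per-replica gradients, weighting each by the fraction of
    the total training data assigned to that replica, then apply one
    gradient-descent step. Illustrative sketch only."""
    total = sum(partition_sizes)
    weights = [n / total for n in partition_sizes]
    # Data-proportional weighted average of the replica gradients.
    agg = sum(w * g for w, g in zip(weights, grads))
    return params - lr * agg

# Example: three replicas on a heterogeneous platform, where faster
# nodes were assigned larger (disjoint) data partitions.
params = np.zeros(4)
grads = [np.ones(4), 2 * np.ones(4), 4 * np.ones(4)]
sizes = [100, 200, 400]
new_params = weighted_gradient_update(params, grads, sizes)
# Weighted average gradient is (100*1 + 200*2 + 400*4) / 700 = 3.0
```

Under uniform averaging each replica would count equally regardless of how much data it processed; the data-proportional weights instead let replicas with larger partitions influence the global parameters more.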
dc.description.version: final version
dc.identifier.citation: Sergio Moreno-Álvarez, Mercedes E. Paoletti, Juan A. Rico-Gallego, Juan M. Haut. "Heterogeneous gradient computing optimization for scalable deep neural networks". The Journal of Supercomputing, 78, 11, 19 March 2022, 13455-13469.
dc.identifier.doi: https://doi.org/10.1007/s11227-022-04399-2
dc.identifier.issn: 0920-8542
dc.identifier.uri: https://hdl.handle.net/20.500.14468/24403
dc.journal.title: The Journal of Supercomputing
dc.journal.volume: 78
dc.language.iso: en
dc.page.final: 13469
dc.page.initial: 13455
dc.publisher: Springer
dc.relation.center: Facultades y escuelas::E.T.S. de Ingeniería Informática
dc.relation.department: Lenguajes y Sistemas Informáticos
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/deed.es
dc.subject: 12 Matemáticas::1203 Ciencia de los ordenadores::1203.17 Informática
dc.subject.keywords: deep learning
dc.subject.keywords: deep neural networks
dc.subject.keywords: high-performance computing
dc.subject.keywords: heterogeneous platforms
dc.subject.keywords: distributed training
dc.title: Heterogeneous gradient computing optimization for scalable deep neural networks
dc.type: article
dc.type: journal article
dspace.entity.type: Publication
relation.isAuthorOfPublication: 3482d7bc-e120-48a3-812e-cc4b25a6d2fe
relation.isAuthorOfPublication.latestForDiscovery: 3482d7bc-e120-48a3-812e-cc4b25a6d2fe
Files
Original bundle
Name: MorenoAlvarez_Sergio_2022HeterogeneousGradien_SERGIO MORENO ALVARE.pdf
Size: 1.86 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 3.62 KB
Format: Item-specific license agreed to upon submission