Deformable registration of multimodal retinal images using a weakly supervised deep learning approach

Martínez Río, Javier; Carmona, Enrique J.; Cancelas, Daniel; Novo, Jorge; Ortega, Marcos (2023). Deformable registration of multimodal retinal images using a weakly supervised deep learning approach. Neural Computing and Applications 35, 14779–14797.

Files
Name Description MIME type Size
Carmona_Suarez_Enrique_DEFORMABLE_REGISTRATION.pdf Carmona_Suarez_Enrique_DEFORMABLE_REGISTRATION.pdf application/pdf 4.12 MB

Title Deformable registration of multimodal retinal images using a weakly supervised deep learning approach
Author(s) Martínez Río, Javier
Carmona, Enrique J.
Cancelas, Daniel
Novo, Jorge
Ortega, Marcos
Subject(s) Computer Science
Computer Engineering
Abstract There are several retinal vascular imaging modalities widely used in clinical practice to diagnose different retinal pathologies. The joint analysis of these multimodal images is of increasing interest, since each of them provides common and complementary visual information. However, to facilitate the comparison of two images obtained with different techniques and containing the same retinal region of interest, it is first necessary to register both images. Here, we present a weakly supervised deep learning methodology for robust deformable registration of multimodal retinal images, which is applied to implement a method for the registration of fluorescein angiography (FA) and optical coherence tomography angiography (OCTA) images. This methodology is strongly inspired by VoxelMorph, a state-of-the-art general unsupervised deep learning framework for deformable registration of unimodal medical images. The method was evaluated on a public dataset with 172 pairs of FA and superficial plexus OCTA images. The degree of alignment of the common information (blood vessels) and the preservation of the non-common information (image background) in the transformed image were measured using the Dice coefficient (DC) and zero-normalized cross-correlation (ZNCC), respectively. The average values of these metrics, including their standard deviations, were DC = 0.72 ± 0.10 and ZNCC = 0.82 ± 0.04. The time required to obtain each pair of registered images was 0.12 s. These results outperform the rigid and deformable registration methods with which our method was compared.
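Note: the abstract evaluates vessel alignment with the Dice coefficient (DC) and background preservation with zero-normalized cross-correlation (ZNCC). The following is a minimal sketch of how these two metrics are typically computed, assuming NumPy arrays (a pair of binary vessel masks for DC and a pair of grayscale images for ZNCC); the function names are illustrative and do not reproduce the authors' implementation.

import numpy as np

def dice_coefficient(mask_a, mask_b):
    # DC = 2 * |A ∩ B| / (|A| + |B|), computed on binary vessel masks.
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total > 0 else 1.0

def zncc(image_a, image_b):
    # Zero-normalized cross-correlation between two grayscale images:
    # subtract each image's mean, then normalize by the product of the norms.
    a = image_a.astype(np.float64).ravel()
    b = image_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0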
Keywords Multimodal image registration
Diffeomorphic transformation
Deep learning
VoxelMorph
OCT angiography
Fluorescein angiography
Publisher Springer
Date 2023-03-28
Format application/pdf
Identifier bibliuned:95-Ejcarmona-0001
http://e-spacio.uned.es/fez/view/bibliuned:95--Ejcarmona-0001
DOI https://doi.org/10.1007/s00521-023-08454-8
ISSN 1433-3058
Journal Neural Computing and Applications
Volume 35
First page 14779
Last page 14797
Published in Neural Computing and Applications 35, 14779–14797 (2023)
Language eng
Publication version publishedVersion
Resource type Article
Access rights and license http://creativecommons.org/licenses/by-nc-nd/4.0
info:eu-repo/semantics/openAccess
Access type Open access
Additional notes The version of record of this article, first published in Neural Computing and Applications 35, 14779–14797 (2023), is available online at the publisher's website: Springer, https://doi.org/10.1007/s00521-023-08454-8

 