Browsing by Author "Ortega, Marcos"
Showing 1 - 3 of 3
Publication: Deformable registration of multimodal retinal images using a weakly supervised deep learning approach (Springer, 2023-03-28)
Authors: Martínez Río, Javier; Carmona, Enrique J.; Cancelas, Daniel; Novo, Jorge; Ortega, Marcos
Abstract: There are different retinal vascular imaging modalities widely used in clinical practice to diagnose different retinal pathologies. The joint analysis of these multimodal images is of increasing interest, since each of them provides common and complementary visual information. However, comparing two images obtained with different techniques and containing the same retinal region of interest requires registering both images beforehand. Here, we present a weakly supervised deep learning methodology for robust deformable registration of multimodal retinal images, which is applied to implement a method for the registration of fluorescein angiography (FA) and optical coherence tomography angiography (OCTA) images. This methodology is strongly inspired by VoxelMorph, a state-of-the-art general unsupervised deep learning framework for deformable registration of unimodal medical images. The method was evaluated on a public dataset with 172 pairs of FA and superficial plexus OCTA images. The degree of alignment of the common information (blood vessels) and the preservation of the non-common information (image background) in the transformed image were measured using the Dice coefficient (DC) and zero-normalized cross-correlation (ZNCC), respectively. The average values of these metrics, with standard deviations, were DC = 0.72 ± 0.10 and ZNCC = 0.82 ± 0.04. The time required to obtain each pair of registered images was 0.12 s. These results outperform the rigid and deformable registration methods with which our method was compared.

Publication: Modeling, localization, and segmentation of the foveal avascular zone on retinal OCT-angiography images (IEEE, 2020-08-17)
Authors: Carmona, Enrique J.; Díaz González, Macarena; Novo, Jorge; Ortega, Marcos
Abstract: The Foveal Avascular Zone (FAZ) is a capillary-free area located inside the macula, and its morphology and size are important biomarkers for detecting different ocular pathologies such as diabetic retinopathy, impaired vision or retinal vein occlusion. Therefore, an adequate and precise segmentation of the FAZ is of high clinical interest. In this regard, Angiography by Optical Coherence Tomography (OCT-A) is a non-invasive imaging technique that allows the expert to visualize the vascular and avascular foveal zones. In this work, we present a robust methodology composed of three stages to model, localize, and segment the FAZ in OCT-A images. The first stage generates two FAZ normality models: one for the superficial plexus and one for the deep plexus. The second stage uses the FAZ model as a template to localize the FAZ center. Finally, in the third stage, an adaptive binarization is proposed to segment the entire FAZ region. A method based on this methodology was implemented and validated on two subsets of OCT-A images, the second presenting more challenging pathological conditions than the first. We obtained localization success rates of 100% and 96% in the first and second subsets, respectively, counting a success when the obtained FAZ center lies inside the FAZ area segmented by an expert clinician. In addition, the Dice score and other indices (Jaccard index and Hausdorff distance) were used to measure the segmentation quality, obtaining competitive average values in the first subset: 0.84 ± 0.01 (expert 1) and 0.85 ± 0.01 (expert 2). The average Dice score obtained in the second subset was also acceptable (0.70 ± 0.17), even though the segmentation process is more complex in this case.
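The two abstracts above evaluate their results with overlap and correlation metrics: the Dice coefficient and ZNCC for the registration work, and the Dice score and Jaccard index for the FAZ segmentation work. As a minimal sketch of how such metrics are typically computed (not taken from the papers; it assumes segmentations are binary NumPy masks and images are grayscale arrays, and the function names are illustrative):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient (DC) between two binary masks, e.g. vessel or FAZ segmentations."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total > 0 else 1.0

def jaccard_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Jaccard index (intersection over union) between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union > 0 else 1.0

def zncc(image_a: np.ndarray, image_b: np.ndarray) -> float:
    """Zero-normalized cross-correlation (ZNCC) between two grayscale images of equal size."""
    a = image_a.astype(np.float64).ravel()
    b = image_b.astype(np.float64).ravel()
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))
```

The Hausdorff distance mentioned in the second abstract is omitted here for brevity; the two overlap metrics differ only in their normalization, which is why Dice values are always at least as high as Jaccard values on the same pair of masks.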
Publication: Robust multimodal registration of fluorescein angiography and optical coherence tomography angiography images using evolutionary algorithms (Elsevier, 2021-07)
Authors: Martínez Río, Javier; Carmona, Enrique J.; Cancelas, Daniel; Novo, Jorge; Ortega, Marcos
Abstract: Optical coherence tomography angiography (OCTA) and fluorescein angiography (FA) are two different vascular imaging modalities widely used in clinical practice to diagnose and grade different relevant retinal pathologies. Although each of them has its advantages and disadvantages, the joint analysis of the images produced by both techniques to analyze a specific area of the retina is of increasing interest, given that they provide common and complementary visual information. However, to facilitate this analysis, it is desirable to first register each pair of FA and OCTA images so that their common areas are superimposed and attention can be focused on the regions of interest. Normally, this task is carried out manually by the expert clinician, but it is tedious and time-consuming. Here, we present a three-stage methodology for robust multimodal registration of FA and superficial plexus OCTA images. The first stage is a preprocessing step devoted to reducing noise and segmenting the main vessels in both types of images. The second stage uses the vessel information to perform an approximate registration based on template matching. Finally, the third stage uses an evolutionary algorithm based on differential evolution to refine the previous registration and obtain the optimal one. The method was evaluated on a dataset with 172 pairs of FA and OCTA images, obtaining a success rate of 98.8%. The best mean execution time of the method was less than 5 s per image.
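The abstract above outlines a pipeline of vessel segmentation, template-matching-based approximate registration, and differential-evolution-based refinement. The sketch below is a rough illustration of the last two stages only, not the authors' implementation: it assumes binary vessel masks are already available as NumPy arrays with the OCTA field of view smaller than the FA one, uses OpenCV's matchTemplate for the coarse translation, and uses SciPy's differential_evolution to refine a similarity transform (translation, rotation, scale) by maximizing a Dice-based vessel overlap; the function names, parameter bounds, and cost function are illustrative assumptions.

```python
import cv2
import numpy as np
from scipy.optimize import differential_evolution

def coarse_translation(fa_vessels: np.ndarray, octa_vessels: np.ndarray):
    """Approximate registration: find the translation that best places the OCTA
    vessel mask inside the (assumed larger) FA vessel mask via template matching."""
    response = cv2.matchTemplate(fa_vessels.astype(np.float32),
                                 octa_vessels.astype(np.float32),
                                 cv2.TM_CCOEFF_NORMED)
    _, _, _, best_xy = cv2.minMaxLoc(response)  # (x, y) of the best match
    return best_xy

def refine_registration(fa_vessels: np.ndarray, octa_vessels: np.ndarray, init_xy,
                        max_shift=20.0, max_angle=10.0):
    """Refine (tx, ty, angle, scale) with differential evolution by maximizing the
    Dice overlap between the warped OCTA vessels and the FA vessels."""
    fa_bin = fa_vessels > 0
    octa = octa_vessels.astype(np.float32)
    h, w = octa.shape
    out_size = (fa_vessels.shape[1], fa_vessels.shape[0])  # warpAffine expects (width, height)

    def cost(params):
        tx, ty, angle, scale = params
        m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, scale)
        m[0, 2] += tx
        m[1, 2] += ty
        warped_bin = cv2.warpAffine(octa, m, out_size) > 0.5
        inter = np.logical_and(fa_bin, warped_bin).sum()
        dice = 2.0 * inter / (fa_bin.sum() + warped_bin.sum() + 1e-12)
        return -dice  # differential evolution minimizes, so negate the similarity

    x0, y0 = init_xy
    bounds = [(x0 - max_shift, x0 + max_shift),   # tx
              (y0 - max_shift, y0 + max_shift),   # ty
              (-max_angle, max_angle),            # rotation (degrees)
              (0.8, 1.2)]                         # isotropic scale
    result = differential_evolution(cost, bounds, maxiter=50, popsize=15, seed=0)
    return result.x  # refined (tx, ty, angle, scale)
```

With vessel masks fa_vessels and octa_vessels in hand, one would call refine_registration(fa_vessels, octa_vessels, coarse_translation(fa_vessels, octa_vessels)) and then apply the resulting transform to the original OCTA image to superimpose it on the FA image.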