Person:
García Seco de Herrera, Alba

Surname: García Seco de Herrera
First name: Alba

Search results

Showing 1 - 5 of 5
  • Publication
    Medical image modality classification using discrete Bayesian networks
    (Elsevier, 2016-10) Arias, Jacinto; Martínez-Gómez, Jesús; Gámez, Jose A.; García Seco de Herrera, Alba; Müller, Henning
    In this paper we propose a complete pipeline for medical image modality classification focused on the application of discrete Bayesian network classifiers. Modality refers to the categorization of biomedical images from the literature according to a previously defined set of image types, such as X-ray, graph or gene sequence. We describe an extensive pipeline covering feature extraction from images, data combination, pre-processing and a range of classification techniques and models. We study the expressive power of several image descriptors along with supervised discretization and feature selection to show the performance of discrete Bayesian networks compared to the deterministic classifiers usually used in image classification. We perform exhaustive experiments on the ImageCLEFmed 2013 collection. Because this problem presents a high number of classes, we propose several hierarchical approaches. In a first set of experiments we evaluate a wide range of parameters for our pipeline along with several classification models. Finally, we compare our selected approaches against the best submissions of the original competition under the same conditions. Results show that the Bayesian network classifiers are very competitive. Furthermore, the proposed approach is stable and can be applied to other problems with an inherently hierarchical class structure.
  • Publication
    Foot Recognition Using Deep Learning for Knee Rehabilitation
    (ASET, 2019) Duangsoithong, Rakkrit; Jaruenpunyasak, Jermphiphut; García Seco de Herrera, Alba
    Foot recognition can be applied in many medical fields, such as gait pattern analysis and the knee exercises of patients in rehabilitation. Generally, a camera-based foot recognition system captures patient images in a controlled room and background and recognizes the foot from a limited set of views. However, such a system can be inconvenient for monitoring knee exercises at home. To overcome these problems, this paper proposes a deep learning method based on Convolutional Neural Networks (CNNs) for foot recognition. The results are compared with a traditional classification approach using LBP and HOG features with kNN and SVM classifiers. According to the results, the deep learning method recognizes foot images from online databases more accurately than the traditional classification method, but at a higher computational complexity.
  • Publication
    Shangri-La: A medical case-based retrieval tool
    (Wiley, 2018-11-28) García Seco de Herrera, Alba; Schaer, Roger; Müller, Henning
    Large amounts of medical visual data are produced in hospitals daily and made available continuously via publications in the scientific literature, representing the medical knowledge. However, it is not always easy to find the desired information, and in clinical routine the time to fulfil an information need is often very limited. Information retrieval systems are a useful tool to provide access to documents and images in the biomedical literature related to the information needs of medical professionals. Shangri-La is a medical retrieval system that can potentially help clinicians to make decisions on difficult cases. It retrieves articles from the biomedical literature when querying a case description and attached images. The system is based on a multimodal retrieval approach with a focus on the integration of visual information connected to text. The approach includes a query-adaptive multimodal fusion criterion that analyses whether visual features are suitable to be fused with text for the retrieval. Furthermore, image modality information is integrated in the retrieval step. The approach is evaluated using the ImageCLEFmed 2013 medical retrieval benchmark and can thus be compared to other approaches. Results show that the final approach outperforms the best multimodal approach submitted to ImageCLEFmed 2013.
  • Publication
    Comparing fusion techniques for the ImageCLEF 2013 medical case retrieval task
    (Elsevier, 2014-03-27) García Seco de Herrera, Alba; Schaer, Roger; Markonis, Dimitrios; Müller, Henning
    Retrieval systems can supply similar cases with a proven diagnosis to a new example case under observation to help clinicians during their work. The ImageCLEFmed evaluation campaign proposes a framework where research groups can compare case-based retrieval approaches. This paper focuses on the case-based task and adds results of the compound figure separation and modality classification tasks. Several fusion approaches are compared to identify the approaches best adapted to the heterogeneous data of the task. Fusion of visual and textual features is analyzed, demonstrating that the selection of the fusion strategy can improve the best performance on the case-based retrieval task.
  • Publication
    Evaluating performance of biomedical image retrieval systems – An overview of the medical image retrieval task at ImageCLEF 2004–2013
    (Elsevier, 2015-01) Kalpathy-Cramer, Jayashree; García Seco de Herrera, Alba; Demner-Fushman, Dina; Antani, Sameer; Bedrick, Steven; Müller, Henning
    Medical image retrieval and classification have been extremely active research topics over the past 15 years. With the ImageCLEF benchmark in medical image retrieval and classification, a standard test bed was created that allows researchers to compare their approaches and ideas on increasingly large and varied data sets, including generated ground truth. This article describes the lessons learned in ten evaluation campaigns. A detailed analysis of the data also highlights the value of the resources created.
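
The abstracts above describe their pipelines only in prose; the short sketches that follow illustrate, under stated assumptions, the kinds of components they mention. First, for "Medical image modality classification using discrete Bayesian networks": descriptors are discretised, reduced by feature selection and classified with a discrete Bayesian model. This is a minimal sketch rather than the authors' implementation: scikit-learn's CategoricalNB (naive Bayes) stands in for the paper's discrete Bayesian network classifiers, unsupervised equal-frequency binning stands in for its supervised discretisation, and all data, sizes and parameters are placeholders.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import CategoricalNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer

# Placeholder data standing in for visual descriptors and modality labels.
rng = np.random.default_rng(0)
X = rng.random((200, 64))          # e.g. bag-of-visual-words or colour histograms
y = rng.integers(0, 5, size=200)   # e.g. X-ray, graph, gene sequence, ...

pipeline = Pipeline([
    # Equal-frequency binning stands in for the paper's supervised discretization.
    ("discretize", KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")),
    # Keep the most informative discretized features.
    ("select", SelectKBest(mutual_info_classif, k=32)),
    # Naive Bayes over categorical features: the simplest discrete Bayesian classifier.
    ("classify", CategoricalNB()),
])

pipeline.fit(X, y)
print("training accuracy:", pipeline.score(X, y))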
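
In the spirit of "Foot Recognition Using Deep Learning for Knee Rehabilitation", the next sketch contrasts the two families of models that paper compares: hand-crafted HOG features with kNN/SVM classifiers versus a small CNN. The network architecture, image sizes, HOG settings and data here are illustrative assumptions, not the paper's configuration (the paper also uses LBP features, omitted here).

import numpy as np
import torch
import torch.nn as nn
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# --- Traditional baseline: HOG descriptors + kNN / SVM ---------------------
def hog_features(images):
    """Compute one HOG descriptor per grayscale image of shape (H, W)."""
    return np.stack([hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
                     for img in images])

# --- Deep baseline: a tiny CNN for foot / non-foot classification ----------
class TinyFootCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):              # x: (batch, 1, 64, 64)
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = rng.random((20, 64, 64))            # placeholder grayscale frames
    labels = rng.integers(0, 2, size=20)         # placeholder foot / non-foot labels

    X = hog_features(images)
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
    svm = SVC(kernel="rbf").fit(X, labels)

    cnn = TinyFootCNN()
    logits = cnn(torch.tensor(images, dtype=torch.float32).unsqueeze(1))
    print(X.shape, logits.shape, knn.score(X, labels), svm.score(X, labels))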
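
Finally, both "Shangri-La: A medical case-based retrieval tool" and "Comparing fusion techniques for the ImageCLEF 2013 medical case retrieval task" rest on late fusion of textual and visual retrieval scores. The sketch below shows two standard fusion rules of that kind, weighted combSUM and combMNZ, applied to min-max-normalised runs; the document identifiers, scores and weights are made up, and the papers' actual fusion strategies and query-adaptive weighting may differ.

def min_max_normalise(scores):
    """Map one run's {doc_id: score} values onto [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def comb_sum(runs, weights):
    """Weighted combSUM: weighted sum of normalised scores across runs."""
    fused = {}
    for run, weight in zip(runs, weights):
        for doc, score in min_max_normalise(run).items():
            fused[doc] = fused.get(doc, 0.0) + weight * score
    return fused

def comb_mnz(runs, weights):
    """combMNZ: combSUM scaled by how many runs retrieved each document."""
    fused = comb_sum(runs, weights)
    hits = {doc: sum(doc in run for run in runs) for doc in fused}
    return {doc: score * hits[doc] for doc, score in fused.items()}

if __name__ == "__main__":
    # Made-up runs: a textual run (e.g. BM25 over article text) and a visual run.
    textual = {"article_12": 14.2, "article_7": 9.8, "article_3": 5.1}
    visual = {"article_7": 0.82, "article_3": 0.65, "article_9": 0.40}

    # A query-adaptive criterion could lower the visual weight when the visual
    # run looks unreliable for a query; here the weights are simply fixed.
    fused = comb_sum([textual, visual], weights=[0.7, 0.3])
    print(sorted(fused, key=fused.get, reverse=True))
    print(sorted(comb_mnz([textual, visual], weights=[0.7, 0.3]).items()))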