Person: García Seco de Herrera, Alba
Email address
ORCID
Date of birth
Research projects
Organisational units
Job title
Surname
García Seco de Herrera
First name
Alba
Name
3 results
Search results
Showing 1 - 3 of 3
Publication: Shangri–La: A medical case–based retrieval tool (Wiley, 2018-11-28)
Authors: García Seco de Herrera, Alba; Schaer, Roger; Müller, Henning
Abstract: Large amounts of medical visual data are produced in hospitals daily and made available continuously via publications in the scientific literature, representing medical knowledge. However, it is not always easy to find the desired information, and in clinical routine the time to fulfil an information need is often very limited. Information retrieval systems are a useful tool for providing access to documents and images in the biomedical literature related to the information needs of medical professionals. Shangri–La is a medical retrieval system that can potentially help clinicians to make decisions on difficult cases. It retrieves articles from the biomedical literature when querying with a case description and attached images. The system is based on a multimodal retrieval approach with a focus on the integration of visual information connected to text. The approach includes a query-adaptive multimodal fusion criterion that analyses whether visual features are suitable to be fused with text for retrieval. Furthermore, image modality information is integrated into the retrieval step. The approach is evaluated using the ImageCLEFmed 2013 medical retrieval benchmark and can thus be compared to other approaches. Results show that the final approach outperforms the best multimodal approach submitted to ImageCLEFmed 2013.

Publication: ROCOv2: Radiology Objects in COntext Version 2, an Updated Multimodal Image Dataset (Nature Research, 2024-06-24)
Authors: Rückert, Johannes; Bloch, Louise; Brünge, Raphael; Idrissi-Yaghir, Ahmad; Schäfer, Henning; Schmidt, Cynthia S.; Koitka, Sven; Pelka, Obioma; Ben Abacha, Asma; García Seco de Herrera, Alba; Müller, Henning; Horn, Peter A.; Nensa, Felix; Friedrich, Christoph M.
ORCID: https://orcid.org/0000-0002-5038-5899; https://orcid.org/0000-0001-7540-4980; https://orcid.org/0000-0002-6046-4048; https://orcid.org/0000-0003-1507-9690; https://orcid.org/0000-0002-4123-0406; https://orcid.org/0000-0003-1994-0687; https://orcid.org/0000-0001-9704-1180; https://orcid.org/0000-0001-5156-4429
Abstract: Automated medical image analysis systems often require large amounts of training data with high-quality labels, which are difficult and time-consuming to generate. This paper introduces Radiology Objects in COntext version 2 (ROCOv2), a multimodal dataset consisting of radiological images and associated medical concepts and captions extracted from the PMC Open Access subset. It is an updated version of the ROCO dataset published in 2018 and adds 35,705 new images added to PMC since 2018. It further provides manually curated concepts for imaging modalities, with additional anatomical and directional concepts for X-rays. The dataset consists of 79,789 images and has been used, with minor modifications, in the concept detection and caption prediction tasks of ImageCLEFmedical Caption 2023. The dataset is suitable for training image annotation models based on image-caption pairs, or for multi-label image classification using the Unified Medical Language System (UMLS) concepts provided with each image. In addition, it can serve for pre-training of medical domain models and for evaluation of deep learning models for multi-task learning.

Publication: Evaluating performance of biomedical image retrieval systems - An overview of the medical image retrieval task at ImageCLEF 2004-2013 (Elsevier, 2015-01)
Authors: Kalpathy-Cramer, Jayashree; García Seco de Herrera, Alba; Demner-Fushman, Dina; Antani, Sameer; Bedrick, Steven; Müller, Henning
Abstract: Medical image retrieval and classification have been extremely active research topics over the past 15 years. With the ImageCLEF benchmark in medical image retrieval and classification, a standard test bed was created that allows researchers to compare their approaches and ideas on increasingly large and varied data sets, including generated ground truth. This article describes the lessons learned in ten evaluation campaigns. A detailed analysis of the data also highlights the value of the resources created.