Person:
Fresno Fernández, Víctor Diego

ORCID
0000-0003-4270-2628
Surname
Fresno Fernández
First name
Víctor Diego

Search results

Showing 1 - 2 of 2
  • Publication
    Test-driving information theory-based compositional distributional semantics: A case study on Spanish song lyrics
    (Elsevier, 2025-06-15) Ghajari Espinosa, Adrián; Benito Santos, Alejandro; Ros Muñoz, Salvador; Fresno Fernández, Víctor Diego; González Blanco, Elena
    Song lyrics pose unique challenges for semantic similarity assessment due to their metaphorical language, structural patterns, and cultural nuances - characteristics that often challenge standard natural language processing (NLP) approaches. These challenges stem from a tension between compositional and distributional semantics: while lyrics follow compositional structures, their meaning depends heavily on context and interpretation. The Information Theory-based Compositional Distributional Semantics framework offers a principled approach by integrating information theory with compositional rules and distributional representations. We evaluate eight embedding models on Spanish song lyrics, including multilingual, monolingual contextual, and static embeddings. Results show that multilingual models consistently outperform monolingual alternatives, with the domain-adapted ALBERTI achieving the highest F1 macro scores (78.92 ± 10.86). Our analysis reveals that monolingual models generate highly anisotropic embedding spaces, significantly impacting performance with traditional metrics. The Information Contrast Model metric proves particularly effective, providing improvements up to 18.04 percentage points over cosine similarity. Additionally, composition functions maintaining longer accumulated vector norms consistently outperform standard averaging approaches. Our findings have important implications for NLP applications and challenge standard practices in similarity calculation, showing that effectiveness varies with both task nature and model characteristics.
    (An illustrative code sketch of the similarity comparison described here follows the results list below.)
  • Publication
    Querying the Depths: Unveiling the Strengths and Struggles of Large Language Models in SPARQL Generation
    (Sociedad Española para el Procesamiento del Lenguaje Natural, 2024-05-15) Ghajari Espinosa, Adrián; Ros Muñoz, Salvador; Pérez Pozo, Álvaro; Fresno Fernández, Víctor Diego; SEPLN, Sociedad Española para el Procesamiento del Lenguaje Natural
    In the quest to democratize access to databases and knowledge graphs, the ability to express queries in natural language and obtain the requested information becomes paramount, particularly for individuals lacking formal training in query languages. This situation affects SPARQL, the standard for querying ontology-based knowledge graphs, posing a significant barrier to many, hindering their ability to leverage these rich resources for research and analysis. To address this gap, our research delves into harnessing the power of Large Language Models (LLMs) to facilitate the generation of SPARQL queries directly from natural language descriptions. For this purpose, we have explored the most popular prompt engineering techniques, a powerful tool in crafting queries that help generative AI models understand and produce specific or generalized outputs based on the quality of provided prompts, without the need for additional training. By integrating few-shot learning (FSL), Chain-of-Thought (CoT) reasoning, and Retrieval-Augmented Generation (RAG), we devise prompts that streamline the creation of effective SPARQL queries, facilitating more straightforward access to ontology knowledge graphs. Our analysis involved prompts evaluated across three distinct LLMs: DeepSeek-Code 6.7b, CodeLlama-13b and GPT 3.5 TURBO. The comparative results revealed marginal variations in accuracy among these models, with FSL emerging as the most effective technique. Our results highlight the potential of LLMs to make knowledge graphs more accessible to a broader audience, but also that much more research is needed to get results comparable to human performance.
    (An illustrative few-shot prompting sketch follows the results list below.)
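
The first publication above contrasts cosine similarity with the Information Contrast Model (ICM) metric and compares standard averaging against composition functions that keep a longer accumulated vector norm. The sketch below is a minimal illustration of that comparison; the use of the vector norm as a proxy for information content and of vector addition as the composition operator are assumptions made for illustration only, not the exact definitions used in the paper.

# Hedged sketch: comparing cosine similarity with an ICM-style score over
# compositional embeddings. The IC function (vector norm) and composition
# operator (vector sum) are illustrative assumptions, not the paper's setup.
import numpy as np

def compose_mean(token_vecs: np.ndarray) -> np.ndarray:
    """Standard averaging composition (the baseline mentioned in the abstract)."""
    return token_vecs.mean(axis=0)

def compose_sum(token_vecs: np.ndarray) -> np.ndarray:
    """Additive composition, which maintains a longer accumulated norm."""
    return token_vecs.sum(axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two composed vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def icm_like(u: np.ndarray, v: np.ndarray, alpha: float = 1.0, beta: float = 1.5) -> float:
    """ICM-style contrast score: information of each part minus the information
    of their composition, with the norm standing in for information content
    (an assumption for illustration)."""
    ic = np.linalg.norm
    return float(alpha * ic(u) + alpha * ic(v) - beta * ic(u + v))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lyric_a = rng.normal(size=(12, 768))  # toy token embeddings for one lyric line
    lyric_b = rng.normal(size=(15, 768))  # toy token embeddings for another
    for name, compose in [("mean", compose_mean), ("sum", compose_sum)]:
        u, v = compose(lyric_a), compose(lyric_b)
        print(name, "cosine:", round(cosine(u, v), 4), "icm-like:", round(icm_like(u, v), 4))

Running the script prints both scores under the two composition functions; the point of the toy comparison is only to show where the composition operator and the similarity metric enter the pipeline, since the paper's reported gains depend on its actual ICM formulation and embedding models.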
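
The second publication reports few-shot learning (FSL) as the most effective prompting technique for natural-language-to-SPARQL generation. The sketch below shows how such a few-shot prompt might be assembled; the example questions, the SPARQL queries, and the generate() placeholder are hypothetical and do not reproduce the authors' actual prompts or evaluation harness.

# Hedged sketch of few-shot prompting (FSL) for natural-language-to-SPARQL
# generation. Examples and the generate() call are placeholders.
FEW_SHOT_EXAMPLES = [
    {
        "question": "List all poems written by Rosalía de Castro.",
        "sparql": 'SELECT ?poem WHERE { ?poem a :Poem ; :author ?a . ?a :name "Rosalía de Castro" . }',
    },
    {
        "question": "How many sonnets are in the knowledge graph?",
        "sparql": "SELECT (COUNT(?s) AS ?n) WHERE { ?s a :Sonnet . }",
    },
]

def build_fsl_prompt(user_question: str) -> str:
    """Assemble a few-shot prompt: task instruction, worked examples, then the
    new question to be translated into SPARQL."""
    parts = ["Translate the natural-language question into a SPARQL query."]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Question: {ex['question']}\nSPARQL: {ex['sparql']}")
    parts.append(f"Question: {user_question}\nSPARQL:")
    return "\n\n".join(parts)

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; wire in whichever client or local model you use."""
    raise NotImplementedError("plug in your LLM client here")

if __name__ == "__main__":
    print(build_fsl_prompt("Which authors published works after 1900?"))

In practice generate() would wrap whichever model is under evaluation (for example a locally hosted CodeLlama-13b or an API-served model), and the CoT or RAG variants discussed in the abstract would extend the prompt with intermediate reasoning steps or retrieved ontology snippets.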