Person:
Gonzalo Arroyo, Julio Antonio

ORCID
0000-0002-5341-9337
Surname(s)
Gonzalo Arroyo
First name
Julio Antonio
Search results

Showing 1 - 3 of 3
  • Publication
    Automatic Detection of Influencers in Social Networks: Authority versus Domain signals
    (Wiley, 2019-01-07) Rodríguez Vidal, Javier; Anaya Sánchez, Henry; Gonzalo Arroyo, Julio Antonio; Plaza Morales, Laura
    Given the task of finding influencers (opinion makers) for a given domain in a social network, we investigate (a) what is the relative importance of domain and authority signals, (b) what is the most effective way of combining signals (voting, classification, learning to rank, etc.) and how best to model the vocabulary signal, and (c) how large the gap is between supervised and unsupervised methods and what the practical consequences are. Our best results on the RepLab dataset (which improve on the state of the art) use language models to learn the domain-specific vocabulary used by influencers and combine domain and authority models using a Learning to Rank algorithm. Our experiments show that (a) both authority and domain evidence can be trained from the vocabulary of influencers; (b) once the language of influencers is modeled as a likelihood signal, further supervised learning and additional network-based signals provide only marginal improvements; and (c) the availability of training data sets is crucial to obtaining competitive results in the task. Our most remarkable finding is that influencers do use a distinctive vocabulary, which is a more reliable signal than non-textual network indicators such as the number of followers, retweets, and so on.
  • Publication
    Combining evaluation metrics via the unanimous improvement ratio and its application in the WePS clustering task
    (Association for the Advancement of Artificial Intelligence, 2011-12-01) Artiles Picón, Javier; Verdejo, M. Felisa; Amigo Cabrera, Enrique; Gonzalo Arroyo, Julio Antonio
    Many Artificial Intelligence tasks cannot be evaluated with a single quality criterion and some sort of weighted combination is needed to provide system rankings. A problem of weighted combination measures is that slight changes in the relative weights may produce substantial changes in the system rankings. This paper introduces the Unanimous Improvement Ratio (UIR), a measure that complements standard metric combination criteria (such as van Rijsbergen's F-measure) and indicates how robust the measured differences are to changes in the relative weights of the individual metrics. UIR is meant to elucidate whether a perceived difference between two systems is an artifact of how individual metrics are weighted. Besides discussing the theoretical foundations of UIR, this paper presents empirical results that confirm the validity and usefulness of the metric for the Text Clustering problem, where there is a tradeoff between precision- and recall-based metrics and results are particularly sensitive to the weighting scheme used to combine them. Remarkably, our experiments show that UIR can be used as a predictor of how well differences between systems measured on a given test bed will also hold in a different test bed.
  • Publication
    EvALL: Open Access Evaluation for Information Access Systems
    (Association for Computing Machinery (ACM), 2017) Almagro Cádiz, Mario; Rodríguez Vidal, Javier; Verdejo, M. Felisa; Amigo Cabrera, Enrique; Carrillo de Albornoz Cuadrado, Jorge Amando; Gonzalo Arroyo, Julio Antonio
    The EvALL online evaluation service aims to provide a unified evaluation framework for Information Access systems that makes results fully comparable and publicly available to the whole research community. For researchers working on a given test collection, the framework makes it possible to: (i) evaluate results in a way that is compliant with measurement theory and with state-of-the-art evaluation practices in the field; (ii) compare their results quantitatively and qualitatively with the state of the art; (iii) provide their results as reusable data to the scientific community; (iv) automatically generate evaluation figures and (low-level) interpretations of the results, both as a PDF report and as LaTeX source. For researchers running a challenge (a comparative evaluation campaign on shared data), the framework helps them to manage, store and evaluate submissions, and to preserve ground-truth and system-output data for future use by the research community. EvALL can be tested at http://evall.uned.es.
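The first publication above models the distinctive vocabulary of influencers as a language-model likelihood signal. The idea can be sketched minimally as follows, assuming a unigram model with add-alpha smoothing and tiny hypothetical corpora; the paper's actual models, feature set and RepLab data are not reproduced here:

```python
import math
from collections import Counter

def train_unigram_lm(texts, alpha=1.0):
    """Train a smoothed unigram language model over the vocabulary
    of known influencers; return a log-likelihood scoring function."""
    counts = Counter(tok for t in texts for tok in t.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves mass for unseen tokens

    def log_prob(text):
        return sum(
            math.log((counts.get(tok, 0) + alpha) / (total + alpha * vocab))
            for tok in text.lower().split()
        )
    return log_prob

# Hypothetical mini-corpus: posts from known domain influencers
influencer_texts = [
    "interest rates and central bank policy",
    "quarterly earnings beat analyst forecasts",
]
lm = train_unigram_lm(influencer_texts)

# Rank candidate users by average per-token log-likelihood under the
# influencer language model (higher = more influencer-like vocabulary)
candidates = {
    "userA": "central bank raises interest rates again",
    "userB": "my cat knocked over the coffee mug",
}
ranking = sorted(
    candidates,
    key=lambda u: lm(candidates[u]) / len(candidates[u].split()),
    reverse=True,
)
print(ranking)  # userA uses more domain vocabulary than userB
```

Per-token normalization keeps the score comparable across users who write different amounts of text; the paper further combines such domain scores with authority signals via Learning to Rank, which this sketch omits.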
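The second publication introduces the Unanimous Improvement Ratio. A minimal sketch under a simplified formulation, where a test case counts as a unanimous improvement when one system is at least as good on every metric and strictly better on at least one (the paper's exact definition and statistical treatment may differ):

```python
def unanimous_improvement_ratio(scores_x, scores_y):
    """Simplified UIR sketch.

    scores_x / scores_y: per-test-case metric tuples for two systems,
    e.g. (precision, recall) for each test case.
    Returns (cases X unanimously improves Y - cases Y unanimously
    improves X) / number of test cases, in [-1, 1].
    """
    def unanimously_better(a, b):
        # a >= b on all metrics, and strictly better on at least one
        return (all(ai >= bi for ai, bi in zip(a, b))
                and any(ai > bi for ai, bi in zip(a, b)))

    pairs = list(zip(scores_x, scores_y))
    x_wins = sum(unanimously_better(a, b) for a, b in pairs)
    y_wins = sum(unanimously_better(b, a) for a, b in pairs)
    return (x_wins - y_wins) / len(pairs)

# Hypothetical (precision, recall) pairs for two systems on 4 test cases
sys_x = [(0.8, 0.7), (0.6, 0.6), (0.5, 0.9), (0.7, 0.4)]
sys_y = [(0.7, 0.6), (0.6, 0.6), (0.6, 0.8), (0.5, 0.3)]
print(unanimous_improvement_ratio(sys_x, sys_y))  # 0.5
```

Here X unanimously improves Y on cases 1 and 4, case 2 is a tie, and case 3 shows a precision/recall tradeoff with no unanimous winner, giving (2 - 0) / 4 = 0.5. Unlike a weighted combination such as the F-measure, this value does not change if precision and recall are reweighted.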