Person: Fernández Amoros, David José
Email address
ORCID
0000-0003-3758-0195
Date of birth
Research projects
Organizational units
Job title
Last name
Fernández Amoros
First name
David José
Name
13 results
Search results
Showing 1 - 10 of 13
Publication: Speeding up derivative configuration from product platforms (MDPI, 2014-06-18). Pérez Morago, Héctor José; Adán, Antonio; Heradio Gil, Rubén; Fernández Amoros, David José.
To compete in the global marketplace, manufacturers try to differentiate their products by focusing on individual customer needs. Fulfilling this goal requires that companies shift from mass production to mass customization. Under this approach, a generic architecture, named product platform, is designed to support the derivation of customized products through a configuration process that determines which components the product comprises. When a customer configures a derivative, typically not every combination of available components is valid. To guarantee that all dependencies and incompatibilities among the derivative's constituent components are satisfied, automated configurators are used. Flexible product platforms provide a large number of interrelated components, so configuring all but trivial derivatives involves considerable effort to select which components the derivative should include. Our approach alleviates that effort by speeding up derivative configuration with a heuristic based on the information-theoretic concept of entropy.

Publication: Exemplar driven development of software product lines (Elsevier, 2012-12-01). Heradio Gil, Rubén; Fernández Amoros, David José; Torre Cubillo, Luis de la; Abad Cardiel, Ismael.
The benefits of following a product line approach to develop similar software systems are well documented. Nevertheless, some case studies have revealed significant barriers to adopting such an approach. In order to minimize the paradigm shift between conventional software engineering and software product line engineering, this paper presents a new development process where the products of a domain are made by analogy to an existing product. Furthermore, this paper discusses the capabilities and limitations of different techniques to implement the analogy relation and proposes a new language to overcome such limitations.

Publication: A scalable approach to exact model and commonality counting for extended feature models (Institute of Electrical and Electronics Engineers (IEEE), 2014-05-29). Fernández Amoros, David José; Heradio Gil, Rubén; Cerrada Somolinos, José Antonio; Cerrada Somolinos, Carlos.
A software product line is an engineering approach to the efficient development of software product portfolios. Key to the success of the approach is to identify the common and variable features of the products and the interdependencies between them, which are usually modeled using feature models. Implicitly, such models also include valuable information that can be used by economic models to estimate the payoffs of a product line. Unfortunately, as product lines grow, analyzing large feature models manually becomes impracticable. This paper proposes an algorithm to compute the total number of products that a feature model represents and, for each feature, the number of products that implement it. The inference of both parameters is helpful to describe the standardization/parameterization balance of a product line, detect scope flaws, assess the product line's incremental development, and improve the accuracy of economic models. The paper reports experimental evidence that our algorithm has better runtime performance than existing alternative approaches.
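As a rough illustration of the kind of counting the abstract above refers to (not the authors' algorithm, which also covers extended feature models), the sketch below computes the number of products of a small feature tree without cross-tree constraints. The tree encoding and the toy car model are hypothetical.

```python
# A minimal sketch of product counting on a feature tree without
# cross-tree constraints. Node layout and feature names are made up
# for illustration; this is not the paper's algorithm.

from functools import reduce
from operator import mul

def count(node):
    """Number of valid products rooted at this feature."""
    kind, children = node.get("kind", "leaf"), node.get("children", [])
    if not children:                      # leaf feature: a single configuration
        return 1
    if kind == "and":                     # mandatory/optional children
        total = 1
        for child in children:
            c = count(child)
            total *= (c + 1) if child.get("optional") else c
        return total
    if kind == "xor":                     # exactly one child is selected
        return sum(count(c) for c in children)
    if kind == "or":                      # at least one child is selected
        return reduce(mul, (count(c) + 1 for c in children), 1) - 1
    raise ValueError(f"unknown node kind: {kind}")

# Hypothetical toy model: a car with a mandatory gearbox (manual or automatic)
# and an optional media unit offering radio and/or bluetooth.
car = {"kind": "and", "children": [
    {"kind": "xor", "children": [{}, {}]},                    # gearbox alternatives
    {"kind": "or",  "optional": True, "children": [{}, {}]},  # media unit
]}
print(count(car))  # 2 gearboxes x (3 media combos + absent) = 8
```

Real feature models add cross-tree constraints, which is what makes exact counting hard and motivates the scalable approach the paper proposes.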
Publication: Anotación semántica no supervisada [Unsupervised semantic annotation] (Universidad Nacional de Educación a Distancia (España). Escuela Técnica Superior de Ingeniería Informática. Departamento de Lenguajes y Sistemas Informáticos, 2004-11-29). Fernández Amoros, David José; Gonzalo Arroyo, Julio Antonio.
This thesis addresses the problem of word sense disambiguation (i.e., given a dictionary, a word and a context, deciding in which of the dictionary senses the word is being used in that context). The sources of information used are: 1. Taxonomic information based on the is-a relation, for example, an eagle is-a bird. 2. Co-occurrence information. Starting from a corpus of almost 300 million words taken from books in electronic format (Project Gutenberg), we study pairs of words whose appearances in short contexts are statistically dependent. We use several measures to calibrate that degree of dependence and employ the resulting information for disambiguation. 3. Information extracted from the WWW. The information in the glosses of the sense inventory is complemented with information extracted from the Web. This information was obtained by Celina Santamaría from a document classification system built by volunteers (Open Directory Project). 4. Information from comparable bilingual corpora. Starting from one corpus in English and another in Spanish, shallow syntactic patterns corresponding to noun phrases were identified in both languages. Building on this work by Anselmo Peñas and Fernando López Ostenero, we study whether the differences between the two languages can be exploited to detect these phrases and to disambiguate through the cross-lingual capabilities of a lexical knowledge base (EuroWordNet). We show that unsupervised semantic annotation can achieve good results and that there are research lines, with significant potential for improvement, that deserve to be explored.

Publication: A literature review on feature diagram product counting and its usage in software product line economic models (World Scientific Publishing, 2013-10-01). Heradio Gil, Rubén; Fernández Amoros, David José; Cerrada Somolinos, José Antonio; Abad Cardiel, Ismael.
In software product line engineering, feature diagrams are a popular means to represent the similarities and differences within a family of related systems. In addition, feature diagrams implicitly model valuable information that can be used in economic models to estimate the cost savings of a product line. In particular, this paper reviews existing proposals on computing the total number of products modeled with a feature diagram and, given a feature, the number of products that implement it. The paper also reviews the economic information that can be estimated when such numbers are known. Thus, this paper contributes by bringing together previously disparate streams of work: the automated analysis of feature diagrams and economic models for product lines.

Publication: Supporting commonality-based analysis of software product lines (Institution of Engineering and Technology (IET), 2011-03-24). Heradio Gil, Rubén; Fernández Amoros, David José; Cerrada Somolinos, José Antonio; Cerrada Somolinos, Carlos.
Software Product Line (SPL) engineering is a cost-effective approach to developing families of similar products. Key to the success of this approach is to correctly scope the domain of the SPL, identifying the common and variable features of the products and the interdependencies between features. In this paper, we show how the commonality of a feature (i.e., the reuse ratio of the feature among the products) can be used to detect scope flaws in the early stages of development. SPL domains are usually modeled by means of feature diagrams following the FODA notation. We extend classical FODA trees with unrestricted cardinalities and present an algorithm to compute the number of products modeled by a feature diagram and the commonality of the features. Finally, we compare the performance of our algorithm with two other approaches built on top of Boolean-logic SAT-solver technology, cachet and relsat.
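Several of the entries above hinge on the commonality of a feature, i.e., its reuse ratio across the products. The minimal sketch below, using a made-up product set, shows how that ratio is obtained once the valid products are known.

```python
# A minimal sketch of commonality analysis: the reuse ratio of each feature
# across the set of valid products. The products listed are hypothetical.

products = [
    {"gearbox", "manual"},
    {"gearbox", "manual", "radio"},
    {"gearbox", "automatic"},
    {"gearbox", "automatic", "radio", "bluetooth"},
]

features = set().union(*products)
commonality = {f: sum(f in p for p in products) / len(products) for f in features}

# Features with commonality 1.0 are common to every product (e.g. "gearbox");
# very low values may signal the scope flaws discussed in the abstract above.
for feature, ratio in sorted(commonality.items(), key=lambda kv: -kv[1]):
    print(f"{feature:10s} {ratio:.2f}")
```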
Publication: Circuit Testing Based on Fuzzy Sampling with BDD Bases (University of Hawaiʻi at Mānoa, 2023). Pinilla, Elena; Fernández Amoros, David José; Heradio Gil, Rubén.
Fuzzy testing of integrated circuits is an established technique. Current approaches generate an approximately uniform random sample from a translation of the circuit to Boolean logic. These approaches have serious scalability issues, which become more pressing with the ever-increasing size of circuits. We propose using a base of binary decision diagrams to sample the translations as a soft computing approach. Uniformity is guaranteed by design and scalability is greatly improved. We test our approach against five other state-of-the-art tools and find that our tool outperforms all of them, both in terms of performance and scalability.

Publication: A Rule-Learning Approach for Detecting Faults in Highly Configurable Software Systems from Uniform Random Samples (2022). Heradio Gil, Rubén; Fernández Amoros, David José; Ruiz Parrado, Victoria; Cobo, Manuel J.
Software systems tend to become more and more configurable to satisfy the demands of their increasingly varied customers. Exhaustively testing the correctness of highly configurable software is infeasible in most cases because the space of possible configurations is typically colossal. This paper proposes addressing this challenge by (i) working with a representative sample of the configurations, i.e., a "uniform" random sample, and (ii) processing the results of testing the sample with a rule induction system that extracts the faults that cause the tests to fail. The paper (i) gives a concrete implementation of the approach, (ii) compares the performance of the rule learning algorithms AQ, CN2, LEM2, PART, and RIPPER, and (iii) provides empirical evidence supporting our procedure.

Publication: Pragmatic Random Sampling of the Linux Kernel: Enhancing the Randomness and Correctness of the conf Tool (Association for Computing Machinery, New York, 2024-09-02). Fernández Amoros, David José; Heradio Gil, Rubén; Horcas Aguilera, Jose Miguel; Galindo, José A.; Benavides, David; Fuentes, Lidia.
The configuration space of some systems is so large that it cannot be computed. This is the case with the Linux Kernel, which provides almost 19,000 configurable options described across more than 1,600 files in the Kconfig language. As a result, many analyses of the Kernel rely on sampling its configuration space (e.g., debugging compilation errors, predicting configuration performance, finding the configuration that optimizes specific performance metrics, etc.). The Kernel can be sampled pragmatically, with its built-in tool conf, or idealistically, by translating the Kconfig files into logic formulas. The pro of the idealistic approach is that it provides statistical guarantees for the sampled configurations; the con is that it poses many challenging problems that have not been solved yet, such as scalability issues. This paper introduces a new version of conf called randconfig+, which incorporates a series of improvements that increase the randomness and correctness of pragmatic sampling and also help validate the Boolean translation required for the idealistic approach. randconfig+ has been tested on 20,000 configurations generated for 10 different Kernel versions from 2003 to the present day. The experimental results show that randconfig+ is compatible with all tested Kernel versions, guarantees the correctness of the generated configurations, and increases conf's randomness for numeric and string options.
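Purely as an illustrative companion to the randconfig+ entry above (this snippet is not part of the tool), the sketch below tallies the value a sampler assigned to a single option across a directory of generated .config files, a simple way to inspect how much variety a pragmatic sampler produces. The directory name and the option are hypothetical.

```python
# A small, illustrative way to inspect how much variety a sampler produces for
# one option across many generated .config files. Paths and the option name
# are hypothetical; this is not part of randconfig+ itself.

from collections import Counter
from pathlib import Path

def option_values(config_dir: str, option: str) -> Counter:
    """Tally the value assigned to `option` in every .config under config_dir."""
    tally = Counter()
    for cfg in Path(config_dir).glob("*.config"):
        value = "unset"
        for line in cfg.read_text().splitlines():
            if line.startswith(f"{option}="):
                value = line.split("=", 1)[1]
                break
        tally[value] += 1
    return tally

# A sampler with poor randomness would concentrate almost all mass on a single
# value; a more random one spreads the counts out across the allowed values.
print(option_values("samples/", "CONFIG_HZ"))
```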
Publication: Uniform and scalable sampling of highly configurable systems (Springer, 2022-01-21). Galindo, José A.; Benavides, David; Batory, Don; Heradio Gil, Rubén; Fernández Amoros, David José.
Many analyses of configurable software systems are intractable when confronted with colossal and highly constrained configuration spaces. These analyses could instead use statistical inference, where a tractable sample accurately predicts results for the entire space. To do so, the laws of statistical inference require each member of the population to be equally likely to be included in the sample, i.e., the sampling process needs to be "uniform". SAT samplers have been developed to generate uniform random samples at a reasonable computational cost. However, there is a lack of experimental validation over colossal spaces to show whether the samplers indeed produce uniform samples or not. This paper (i) proposes a new sampler named BDDSampler, (ii) presents a new statistical test to verify sampler uniformity, and (iii) reports the evaluation of BDDSampler and five other state-of-the-art samplers: KUS, QuickSampler, Smarch, Spur, and Unigen2. Our experimental results show that only BDDSampler satisfies both scalability and uniformity.
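The principle behind BDD-based uniform sampling can be sketched briefly: annotate the diagram with model counts and descend it, choosing each branch with probability proportional to the count below it. The toy diagram below encodes (x0 or x1) over two variables and only illustrates that principle; it is not BDDSampler's implementation.

```python
# A minimal sketch of counting-based uniform sampling from a BDD. The tiny
# hand-built diagram encodes (x0 or x1) and is purely illustrative.

import random
from functools import lru_cache

# BDD nodes: True/False terminals, or (var_index, low_child, high_child).
N_VARS = 2
BDD = (0, (1, False, True), True)        # x0 ? True : (x1 ? True : False)

def var(node):
    """Variable level of a node; terminals sit below all variables."""
    return node[0] if isinstance(node, tuple) else N_VARS

@lru_cache(maxsize=None)
def models(node, level=0):
    """Number of satisfying assignments over variables level..N_VARS-1."""
    if node is False:
        return 0
    if node is True:
        return 2 ** (N_VARS - level)
    gap = var(node) - level              # variables skipped on the way down
    return 2 ** gap * (models(node[1], var(node) + 1) + models(node[2], var(node) + 1))

def sample(node=BDD, level=0):
    """Draw one satisfying assignment uniformly at random."""
    assignment = {}
    while level < N_VARS:
        if not isinstance(node, tuple) or var(node) > level:
            assignment[level] = random.random() < 0.5   # unconstrained variable
            level += 1
            continue
        lo, hi = models(node[1], level + 1), models(node[2], level + 1)
        take_high = random.random() < hi / (lo + hi)
        assignment[level] = take_high
        node = node[2] if take_high else node[1]
        level += 1
    return assignment

print(models(BDD))   # 3 models: {x0}, {x1}, {x0, x1}
print(sample())      # one of the three, each drawn with probability 1/3
```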