Date
2025-07-04
Access rights
info:eu-repo/semantics/openAccess
Publisher
Springer
Abstract
Question Answering (QA) is often used to assess the reasoning capabilities of NLP systems. For a QA system, it is crucial to be able to determine answerability: whether the question can be answered with the information at hand. Previous works have studied answerability by including a fixed proportion of unanswerable questions in a collection, without explaining the reasons for that proportion or its impact on systems' results. Furthermore, they do not answer the question of whether systems learn to determine answerability. This work aims to answer that question, providing a systematic analysis of how unanswerable question ratios in training data impact QA systems. To that end, we create a series of versions of the well-known Multiple-Choice QA dataset RACE by modifying different amounts of questions to make them unanswerable, and then train and evaluate several Large Language Models on them. We show that LLMs tend to overfit the distribution of unanswerable questions encountered during training, while the ability to decide on answerability always comes at the expense of finding the answer when it exists. Our experiments also show that a proportion of unanswerable questions around 30%, as found in existing datasets, produces the most discriminating systems. We hope these findings offer useful guidelines for future dataset designers looking to address the problem of answerability.
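The sketch below is purely illustrative of the kind of dataset manipulation the abstract describes; it is not the authors' released code. It assumes multiple-choice examples are stored as dictionaries with hypothetical "options" and "label" fields, and that an item is made unanswerable by replacing its gold option with a "None of the above"-style distractor.

import random

def make_unanswerable(example):
    # Replace the gold option so that no listed choice answers the question
    # (one possible strategy; the paper's exact procedure may differ).
    options = list(example["options"])
    options[example["label"]] = "None of the above"
    return {**example, "options": options, "label": None, "answerable": False}

def build_variant(dataset, unanswerable_ratio, seed=0):
    # Convert a fixed proportion of questions into unanswerable ones,
    # leaving the remaining questions untouched.
    rng = random.Random(seed)
    n = int(len(dataset) * unanswerable_ratio)
    flip = set(rng.sample(range(len(dataset)), n))
    return [make_unanswerable(ex) if i in flip else {**ex, "answerable": True}
            for i, ex in enumerate(dataset)]

# Example: a variant where roughly 30% of the questions cannot be answered.
# variant_30 = build_variant(race_examples, unanswerable_ratio=0.30)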
Description
The version of record of this article, first published in Applied Intelligence 55, 887 (2025), is available online from the publisher's website: https://doi.org/10.1007/s10489-025-06765-y.
Keywords
Question answering, Answerability, Multiple choice
Citation
Reyes-Montesinos, J., Rodrigo, Á. & Peñas, A. None of the above: comparing scenarios for answerability detection in question answering systems. Appl Intell 55, 887 (2025). https://doi.org/10.1007/s10489-025-06765-y
Center
E.T.S. de Ingeniería Informática
Department
Lenguajes y Sistemas Informáticos



