Title: None of the above: comparing scenarios for answerability detection in question answering systems
Authors: Reyes Montesinos, Julio; Rodrigo Yuste, Álvaro; Peñas Padilla, Anselmo
Type: journal article
Date issued: 2025-07-04
Date available: 2025-08-06
Citation: Reyes-Montesinos, J., Rodrigo, Á. & Peñas, A. None of the above: comparing scenarios for answerability detection in question answering systems. Appl Intell 55, 887 (2025). https://doi.org/10.1007/s10489-025-06765-y
ISSN: 0924-669X; eISSN: 1573-7497
DOI: https://doi.org/10.1007/s10489-025-06765-y
Handle: https://hdl.handle.net/20.500.14468/29834
Publisher's note: The registered version of this article, first published in Applied Intelligence 55, 887 (2025), is available online from the publisher's website: https://doi.org/10.1007/s10489-025-06765-y

Abstract: Question Answering (QA) is often used to assess the reasoning capabilities of NLP systems. For a QA system, it is crucial to be able to determine answerability: whether the question can be answered with the information at hand. Previous works have studied answerability by including a fixed proportion of unanswerable questions in a collection, without explaining the reasons for that proportion or its impact on systems' results. Furthermore, they do not answer the question of whether systems actually learn to determine answerability. This work aims to answer that question, providing a systematic analysis of how the ratio of unanswerable questions in training data affects QA systems. To that end, we create a series of versions of the well-known Multiple-Choice QA dataset RACE by modifying different amounts of questions to make them unanswerable, and then train and evaluate several Large Language Models on them. We show that LLMs tend to overfit the distribution of unanswerable questions encountered during training, while the ability to decide on answerability always comes at the expense of finding the answer when it exists. Our experiments also show that a proportion of unanswerable questions around 30%, as found in existing datasets, produces the most discriminating systems. We hope these findings offer useful guidelines for future dataset designers looking to address the problem of answerability.

Language: en
Rights: info:eu-repo/semantics/openAccess
Subject: 1203.17 Informática (Computer Science)
Keywords: Question answering; Answerability; Multiple choice
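
Illustrative sketch: the abstract describes building dataset versions with controlled proportions of unanswerable questions. The Python sketch below shows one possible way to do this for a RACE-style multiple-choice split. The record layout (an "options" list with an "A"-"D" answer label), the replacement strategy (swapping the gold option for a distractor drawn from another question), and the "NONE" answer label are assumptions made for illustration only, not the procedure used in the paper.

    import random

    def make_unanswerable(example, distractor_pool, rng):
        """Remove the correct option so that no listed choice answers the question.
        Illustrative only: the gold option is replaced by a distractor drawn from
        another question, and the answer is set to a hypothetical 'NONE' label.
        A real implementation would also check that the inserted distractor is not
        itself a valid answer to the question."""
        ex = dict(example)
        options = list(ex["options"])
        gold = ord(ex["answer"]) - ord("A")   # RACE-style labels: "A".."D"
        options[gold] = rng.choice(distractor_pool)
        ex["options"] = options
        ex["answer"] = "NONE"                 # hypothetical label for unanswerable items
        return ex

    def build_split(examples, unanswerable_ratio, seed=0):
        """Return a copy of the split in which a fixed fraction of questions
        (e.g. 0.3 for 30%) has been made unanswerable, chosen uniformly at random."""
        rng = random.Random(seed)
        pool = [opt for ex in examples for opt in ex["options"]]
        n_flip = int(round(unanswerable_ratio * len(examples)))
        flagged = set(rng.sample(range(len(examples)), n_flip))
        return [make_unanswerable(ex, pool, rng) if i in flagged else dict(ex)
                for i, ex in enumerate(examples)]

Varying unanswerable_ratio across runs (e.g. 0.0, 0.1, 0.3, 0.5) would produce the kind of training-set variants whose effect on answerability detection the paper analyses.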