Title: Qualitative analysis through visual interpretability techniques of neural network models for mammography classification
Author: Rodríguez Sampayo, Marta
Type: master's thesis
Issued: 2021-09-01
Deposited: 2024-05-20
Handle: https://hdl.handle.net/20.500.14468/14675
Language: en
Access: info:eu-repo/semantics/openAccess
Keywords: Explainable Artificial Intelligence; interpretability; Deep Learning; EfficientNet; Vision Transformer; mammography

Abstract: Research in artificial intelligence is increasingly focused on the explainability of the algorithms it produces, mainly neural networks. This trend, known as XAI (Explainable Artificial Intelligence), brings advantages such as increased confidence in the decision-making process, improved error analysis, verification of results, and the possibility of model refinement, among others. In this work we focus on interpreting the predictions of recently developed deep learning models through different visualization techniques. Our use case is the detection of breast cancer through the classification of mammograms, since the medical field benefits greatly from the contributions of XAI methods. Furthermore, the target neural networks are based on recent and still little-explored architectures: the Vision Transformer, built from attention blocks, and EfficientNet, designed to improve the performance of convolutional networks.