Computer science
Convolutional neural network
Topology (electrical circuits)
Space (punctuation)
Artificial intelligence
Pattern recognition (psychology)
Mathematics
Combinatorics
Operating system
Authors
Clara I. López-González, María J. Gómez-Silva, Eva Besada-Portas, Gonzalo Pájares
Source
Journal: Neurocomputing
[Elsevier BV]
Date: 2024-05-09
Volume/issue: 593: 127806
Identifier
DOI: 10.1016/j.neucom.2024.127806
Abstract
The development of explainability methods for Convolutional Neural Networks (CNNs), within the growing framework of explainable Artificial Intelligence (xAI) for image understanding, is crucial given the success of neural networks in contrast with their black-box nature. However, the usual methods focus on image visualizations and are inadequate both for analyzing the encoded contextual information (which captures the spatial dependencies of pixels with respect to their neighbors) and for explaining the evolution of learning across layers without degrading the information. To address the latter, this paper presents a novel explanatory method based on studying the latent representations of CNNs through their topology, supported by Topological Data Analysis (TDA). For each activation layer after a convolution, the pixel values of the activation maps along the channels are treated as latent space points. The persistent homology of these data is summarized via persistence landscapes, called Latent Landscapes. This provides a global view of how contextual information is encoded, its variety and evolution, and allows for statistical analysis. The applicability and effectiveness of the approach is demonstrated by experiments conducted with CNNs trained on distinct datasets: (1) two U-Net segmentation models on RGB and pseudo-multiband images (generated by considering vegetation indices) from the agricultural benchmark CRBD were evaluated, in order to explain the difference in performance; and (2) a VGG-16 classification network on CIFAR-10 (RGB) was analyzed, showing how the information evolves within the network. Moreover, comparisons with state-of-the-art methods (Grad-CAM and occlusion) demonstrate the consistency and validity of the proposal. It offers novel insights into the decision-making process and helps to compare how models learn.
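To make the persistence-landscape summary concrete, the following is a minimal sketch (not the authors' implementation) of how a single landscape function is evaluated from a persistence diagram: each (birth, death) pair contributes a triangular "tent" function, and the k-th landscape at a point t is the k-th largest tent value there. The diagram points used below are invented for illustration only.

```python
def landscape(diagram, k, t):
    """Evaluate the k-th persistence landscape at t.

    diagram: list of (birth, death) pairs from persistent homology.
    Each pair (b, d) contributes the tent function max(0, min(t - b, d - t));
    the k-th landscape is the k-th largest of these values (0 if fewer than k).
    """
    tents = sorted(
        (max(0.0, min(t - b, d - t)) for b, d in diagram),
        reverse=True,
    )
    return tents[k - 1] if k <= len(tents) else 0.0

# Toy diagram: a long-lived feature (0, 4) and a shorter one (1, 3).
diagram = [(0.0, 4.0), (1.0, 3.0)]
print(landscape(diagram, 1, 2.0))  # → 2.0 (tent of the long-lived feature)
print(landscape(diagram, 2, 2.0))  # → 1.0 (tent of the shorter feature)
```

In the paper's setting, the diagram would come from the persistent homology of the latent points (activation-map pixel vectors along the channels), for which a TDA library such as GUDHI is typically used; averaging such landscapes over inputs is what enables the statistical comparisons described in the abstract.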