Interpretability
Computer science
Relevance (law)
Artificial neural network
Artificial intelligence
Machine learning
Layer (electronics)
Black box
Deep neural network
Metric (data warehouse)
Backpropagation
Data mining
Chemistry
Organic chemistry
Political science
Law
Authors
Marilyn Bello,Gonzalo Nápoles,Koen Vanhoof,María García,Rafael Bello
Identifier
DOI:10.1109/ijcnn55064.2022.9892239
Abstract
Neural networks are considered black-box models because their strength in modeling complex interactions makes their operation almost impossible to explain. Still, neural networks remain very attractive tools, as they have shown promising performance in various classification tasks. Layer-wise relevance propagation is a technique that, based on a propagation approach, is able to explain the predictions obtained by a neural network. In this work, we propose four adaptations of this technique to operate on multi-label neural networks. The proposed methods provide new ways of distributing the relevance between the output layer and the preceding ones. The efficacy of these adaptations is demonstrated through an experimental study based on evaluation criteria from the literature that measure the quality of explanations. The methods are then applied to a case study in which a neural network is used to detect secondary coinfections in patients infected with SARS-CoV-2. Overall, the proposed methods provide a post-hoc interpretability stage for the results.
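To illustrate the propagation idea behind layer-wise relevance propagation (not the paper's multi-label adaptations, which are not specified here), the following is a minimal sketch of the standard epsilon-rule for a toy fully connected network. All names, shapes, and the random toy weights are hypothetical; the key property shown is that relevance assigned at the output is redistributed layer by layer and approximately conserved.

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """One backward LRP step (epsilon-rule):
    R_i = a_i * sum_j( w_ij * R_j / (z_j + eps*sign(z_j)) )."""
    z = activations @ weights                    # pre-activations of the upper layer
    s = relevance_out / (z + eps * np.sign(z))   # stabilized relevance ratio
    return activations * (s @ weights.T)         # redistribute to the lower layer

# Hypothetical two-layer toy network with non-negative activations.
rng = np.random.default_rng(0)
a0 = rng.random(4)            # input activations
W1 = rng.random((4, 3))       # first-layer weights
a1 = a0 @ W1                  # hidden activations
W2 = rng.random((3, 2))       # second-layer weights
z2 = a1 @ W2                  # output scores

R2 = z2                       # initialize relevance with the output scores
R1 = lrp_epsilon(W2, a1, R2)  # propagate: output -> hidden
R0 = lrp_epsilon(W1, a0, R1)  # propagate: hidden -> input

# Conservation: total relevance is (approximately) preserved across layers.
print(np.allclose(R0.sum(), z2.sum(), atol=1e-4))
```

`R0` can then be read as a per-feature attribution of the network's output, which is the sense in which LRP provides post-hoc explanations; the paper's four methods vary how relevance is distributed between the output layer and the preceding ones in the multi-label setting.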