Keywords
Relevance (law)
Computer science
Attribution
Transparency (behavior)
Representation (politics)
Artificial intelligence
Field (mathematics)
Cognitive science
Deep learning
Data science
Task (project management)
Cognitive psychology
Authors
Reduan Achtibat,Maximilian Dreyer,Ilona Eisenbraun,Sebastian Bosse,Thomas Wiegand,Wojciech Samek,Sebastian Lapuschkin
Identifier
DOI:10.1038/s42256-023-00711-8
Abstract
The field of explainable artificial intelligence (XAI) aims to bring transparency to today’s powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying ‘where’ important features occur (but not providing information about ‘what’ they represent), global explanation techniques visualize what concepts a model has generally learned to encode. Both types of method thus provide only partial insights and leave the burden of interpreting the model’s reasoning to the user. Here we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives and thus allows answering both the ‘where’ and ‘what’ questions for individual predictions. We demonstrate the capability of our method in various settings, showcasing that CRP leads to more human interpretable explanations and provides deep insights into the model’s representation and reasoning through concept atlases, concept-composition analyses, and quantitative investigations of concept subspaces and their role in fine-grained decision-making.
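The core idea the abstract describes, conditioning a relevance backward pass on a chosen "concept" (e.g. a hidden unit or filter) so each attribution map answers both ‘where’ and ‘what’, can be illustrated with a minimal sketch. This is not the authors' implementation: the two-layer ReLU network, the basic LRP-0 redistribution rule, and all function names below are illustrative assumptions chosen to show the conditioning mechanism only.

```python
import numpy as np

# Hypothetical two-layer ReLU network: input x -> hidden "concept" units h -> logits y
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 6))   # input (6 features) -> 4 hidden concept units
W2 = rng.normal(size=(2, 4))   # hidden units -> 2 output classes

def lrp_0(R_out, a_in, W):
    """Basic LRP-0 rule: redistribute output relevance R_out onto inputs a_in
    in proportion to each input's contribution to the pre-activations."""
    z = W @ a_in                               # pre-activations of the layer
    s = R_out / np.where(z == 0, 1e-9, z)      # relevance per unit pre-activation
    return a_in * (W.T @ s)                    # relevance assigned to the inputs

def crp_sketch(x, target_class, concept_id=None):
    """Concept-conditional relevance sketch: if concept_id is given, keep only
    that hidden unit's relevance before propagating down to the input."""
    h = np.maximum(0.0, W1 @ x)                # hidden (concept) activations
    y = W2 @ h
    R_y = np.zeros_like(y)
    R_y[target_class] = y[target_class]        # start relevance at the target logit
    R_h = lrp_0(R_y, h, W2)                    # relevance at the concept layer
    if concept_id is not None:                 # CRP-style conditioning step:
        mask = np.zeros_like(R_h)
        mask[concept_id] = 1.0                 # zero out all other concepts
        R_h = R_h * mask
    return lrp_0(R_h, x, W1)                   # input-space heatmap for this concept

x = rng.normal(size=6)
full = crp_sketch(x, target_class=0)                     # standard local attribution
per_concept = [crp_sketch(x, 0, c) for c in range(4)]    # one map per concept
# Because the propagation is linear in the relevance, the per-concept
# maps sum back to the unconditioned attribution map.
assert np.allclose(sum(per_concept), full)
```

The masking step is the whole trick: the unconditioned pass gives a single ‘where’ map, while each masked pass decomposes it into per-concept maps whose sum recovers the original, letting one ask which concept contributed which image region.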