Keywords
Explainability; Computer science; Artificial intelligence; Field (mathematics); Management science; Perspective (graphical); Interpretation (philosophy); Artificial neural network; Selection (genetic algorithm); Data science; Machine learning; Deep neural network; Focus (optics); Engineering; Mathematics; Programming language; Pure mathematics; Physics; Optics
Authors
Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller
Source
Journal: Proceedings of the IEEE
[Institute of Electrical and Electronics Engineers]
Date: 2021-03-01
Volume/Issue: 109 (3): 247-278
Citations: 335
Identifier
DOI: 10.1109/jproc.2021.3060483
Abstract
With the broader and highly successful usage of machine learning (ML) in industry and the sciences, there has been a growing demand for explainable artificial intelligence (XAI). Interpretability and explanation methods for gaining a better understanding of the problem-solving abilities and strategies of nonlinear ML, in particular, deep neural networks, are, therefore, receiving increased attention. In this work, we aim to: 1) provide a timely overview of this active emerging field, with a focus on “post hoc” explanations, and explain its theoretical foundations; 2) put interpretability algorithms to a test both from a theory and comparative evaluation perspective using extensive simulations; 3) outline best practice aspects, i.e., how to best include interpretation methods into the standard usage of ML; and 4) demonstrate successful usage of XAI in a representative selection of application scenarios. Finally, we discuss challenges and possible future directions of this exciting foundational field of ML.
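The abstract's focus on "post hoc" explanation methods can be illustrated with a minimal sketch: gradient × input attribution, one of the simplest post-hoc techniques surveyed in this literature, scores each input feature by the product of the model's output gradient and the feature's value. The tiny two-layer ReLU network and random weights below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal post-hoc explanation sketch: gradient x input attribution
# for a tiny two-layer ReLU network (illustrative weights and input).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden-layer weights
W2 = rng.normal(size=(1, 4))   # output-layer weights
x = rng.normal(size=3)         # example input to be explained

# Forward pass
z1 = W1 @ x                    # pre-activations
h = np.maximum(z1, 0.0)        # ReLU
y = float(W2 @ h)              # scalar output (the quantity to explain)

# Backward pass: gradient of y with respect to the input x
grad_h = W2.ravel()            # dy/dh
grad_z1 = grad_h * (z1 > 0)    # ReLU gates the gradient
grad_x = W1.T @ grad_z1        # dy/dx

# Gradient x input: one relevance score per input feature
relevance = grad_x * x
print(relevance)
```

Post-hoc here means the explanation is computed after training, from an already-fitted model, without changing its architecture; richer methods discussed in the paper (e.g., layer-wise relevance propagation) refine how the output is decomposed over features.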