Topics
Computer science, Artificial neural network, Backpropagation, Variety (cybernetics), Artificial intelligence, Set (abstract data type), Deep learning, Relevance (law), Machine learning, Propagation of uncertainty, Layer (electronics), Algorithm, Political science, Organic chemistry, Chemistry, Programming language, Law
Authors
Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller
Identifier
DOI:10.1007/978-3-030-28954-6_10
Abstract
For a machine learning model to generalize well, one needs to ensure that its decisions are supported by meaningful patterns in the input data. A prerequisite, however, is that the model be able to explain itself, e.g., by highlighting which input features it uses to support its prediction. Layer-wise Relevance Propagation (LRP) is a technique that brings such explainability and scales to potentially highly complex deep neural networks. It operates by propagating the prediction backward through the neural network, using a set of purposely designed propagation rules. In this chapter, we give a concise introduction to LRP with a discussion of (1) how to implement propagation rules easily and efficiently, (2) how the propagation procedure can be theoretically justified as a 'deep Taylor decomposition', (3) how to choose the propagation rules at each layer to deliver high explanation quality, and (4) how LRP can be extended to handle a variety of machine learning scenarios beyond deep neural networks.
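To make the backward pass the abstract describes concrete, below is a minimal NumPy sketch of one common propagation rule (LRP-ε) applied to fully-connected ReLU layers. The function name lrp_epsilon, the toy two-layer network, and the choice of ε are illustrative assumptions for this page, not code taken from the chapter itself.

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """One LRP-epsilon backward step through a fully-connected layer (illustrative sketch).

    weights:       (d_in, d_out) weight matrix of the layer
    activations:   (d_in,) input activations a_j of the layer
    relevance_out: (d_out,) relevance scores R_k assigned to the layer's outputs
    Returns the (d_in,) relevance scores R_j redistributed onto the inputs.
    """
    z = activations @ weights                      # pre-activations z_k = sum_j a_j w_jk
    z = z + eps * np.where(z >= 0, 1.0, -1.0)      # epsilon stabilizer keeps z away from zero
    s = relevance_out / z                          # relevance per unit of pre-activation
    c = weights @ s                                # c_j = sum_k w_jk s_k
    return activations * c                         # R_j = a_j c_j

# Toy usage: propagate relevance from the output back to the input of a
# small random two-layer ReLU network (hypothetical example network).
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))
a0 = rng.random(4)
a1 = np.maximum(0.0, a0 @ W1)                      # ReLU hidden layer
out = a1 @ W2

R2 = out * (np.arange(2) == out.argmax())          # start from the predicted class score
R1 = lrp_epsilon(W2, a1, R2)
R0 = lrp_epsilon(W1, a0, R1)
print(R0, R0.sum())                                # per-input relevances; sum ~ output score
```

With ε set to zero this reduces to the basic LRP-0 rule; a larger ε absorbs weak or contradictory contributions, which is why the input relevances sum only approximately to the explained output score.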