Computer science
Fidelity
Graph
Decision boundary
Artificial intelligence
Benchmark
Artificial neural network
Machine learning
Theoretical computer science
Authors
Ji Chaojie, Ruxin Wang, Hongyan Wu
Identifier
DOI:10.1016/j.neucom.2022.04.070
Abstract
While graph neural networks (GNNs) have shown great potential in various graph-related tasks, their lack of transparency has hindered our understanding of how they arrive at their predictions. The fidelity to the local decision boundary of the original model, indicating how well the explainer fits the original model around the instance to be explained, is neglected by existing GNN explainers. In this paper, we first propose a novel post hoc framework based on local fidelity for any trained GNN, called TraP2, which can generate high-fidelity explanations. Considering that both the relevant graph structure and the important features inside each node must be highlighted, TraP2 is designed as a three-layer architecture: i) the interpretation domain is defined in advance by the Translation layer; ii) the local predictive behaviors of the GNN being explained are probed and monitored by the Perturbation layer, in which multiple perturbations of the graph structure and of node features are conducted within the interpretation domain; and iii) highly faithful explanations are generated by the Paraphrase layer, which fits the local decision boundary of the GNN being explained. We evaluated TraP2 on several benchmark datasets under four metrics, accuracy, area under the receiver operating characteristic curve, fidelity, and contrastivity, and the results show that it significantly outperforms state-of-the-art methods.
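The perturb-then-fit scheme the abstract describes is in the same family as LIME-style local surrogate explanation: mask parts of the instance (here, edges and node features), query the black-box model on each perturbed sample, and fit a simple weighted model to the responses so that its coefficients attribute importance. The sketch below illustrates that generic idea only; the toy `black_box` function, the masking scheme, and the locality kernel are all illustrative assumptions, not the actual TraP2 layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained GNN's prediction on one node: a fixed
# function of the node's feature vector and its adjacency row.
w_feat = np.array([1.5, -2.0, 0.5])
w_edge = np.array([0.8, 0.0, -1.2, 0.4])

def black_box(x, adj_row):
    z = x @ w_feat + adj_row @ w_edge
    return 1.0 / (1.0 + np.exp(-z))  # probability-like score

# Instance to explain: its 3 features and its edges to 4 neighbours.
x0 = np.array([1.0, 1.0, 1.0])
a0 = np.array([1.0, 1.0, 1.0, 1.0])

# Perturbation step (sketch): randomly mask features and edges and
# record the black-box response to each perturbed sample.
n_samples = 500
masks = rng.integers(0, 2, size=(n_samples, x0.size + a0.size)).astype(float)
ys = np.array([black_box(x0 * m[:x0.size], a0 * m[x0.size:]) for m in masks])

# Surrogate-fitting step (sketch): weighted least squares on the masks,
# with sample weights favouring perturbations close to the original.
dist = (1.0 - masks).sum(axis=1)        # number of masked components
sample_w = np.exp(-dist / 2.0)          # locality kernel (assumed form)
A = np.hstack([masks, np.ones((n_samples, 1))])  # intercept column
W = np.diag(sample_w)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ ys)

importance = coef[:-1]  # per-feature and per-edge attributions
```

In this toy setup the recovered attributions track the signs of the hidden weights: components that raise the score get positive importance, components that lower it get negative importance. The actual framework differs in how the interpretation domain and perturbations are defined, but the fidelity notion, how well the surrogate matches the model's local decision boundary, is exactly what such a fit optimizes.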