Keywords
Computer science
Generalization
Graph
Artificial intelligence
Nonlinear system
Artificial neural network
Theoretical computer science
Model selection
Machine learning
Mathematics
Physics
Quantum mechanics
Mathematical analysis
Authors
Qiang Huang, Makoto Yamada, Yuan Tian, Dinesh Singh, Yi Chang
Identifiers
DOI: 10.1109/tkde.2022.3187455
Abstract
Graph-structured data is widely applicable in domains such as physics, chemistry, biology, computer vision, and social networks, to name a few. Recently, graph neural networks (GNNs) have been shown to represent graph-structured data effectively, owing to their strong performance and generalization ability. However, explaining the effectiveness of GNN models is a challenging task because of the complex nonlinear transformations applied over successive iterations. In this paper, we propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, a nonlinear feature selection method. GraphLIME is a generic GNN-model explanation framework that learns a nonlinear interpretable model locally in the subgraph of the node being explained. In experiments on two real-world datasets, the explanations produced by GraphLIME are found to be substantially more descriptive than those of existing explanation methods.
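To make the core step described in the abstract concrete, below is a minimal Python sketch of the HSIC Lasso fit that GraphLIME performs locally: given the node features and GNN outputs of the sampled neighborhood of the node being explained, select the features whose (nonlinear, kernel-based) dependence best reproduces the GNN's predictions. This is an illustrative sketch, not the authors' implementation; the function names (`gaussian_gram`, `center_and_normalize`, `hsic_lasso_explain`) and the use of scikit-learn's nonnegative `Lasso` as the solver are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso


def gaussian_gram(x, sigma):
    """Gram matrix of a 1-D variable under a Gaussian (RBF) kernel."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))


def center_and_normalize(K):
    """Center a Gram matrix (H K H) and scale it to unit Frobenius norm."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    return Kc / (np.linalg.norm(Kc) + 1e-12)


def hsic_lasso_explain(X, y, lam=0.01):
    """Rank features of X by nonlinear dependence on y via HSIC Lasso.

    X : (n, d) features of the sampled neighborhood of the node
    y : (n,)   GNN outputs (e.g., predicted probabilities) for those nodes
    Returns nonnegative per-feature weights; the nonzero ones form the
    local explanation.
    """
    n, d = X.shape
    # Centered, normalized output kernel (std-based bandwidth is a heuristic).
    L = center_and_normalize(gaussian_gram(y.astype(float), np.std(y) + 1e-12))
    # One centered Gram matrix per input feature, flattened into a design column.
    A = np.stack(
        [center_and_normalize(
             gaussian_gram(X[:, k], np.std(X[:, k]) + 1e-12)).ravel()
         for k in range(d)],
        axis=1,
    )
    # Nonnegative Lasso: sparse combination of feature kernels that best
    # matches the output kernel.
    model = Lasso(alpha=lam, positive=True, fit_intercept=False)
    model.fit(A, L.ravel())
    return model.coef_


# Hypothetical usage: X_nbr and y_nbr would be gathered from the trained GNN
# and the k-hop subgraph around the node being explained.
# weights = hsic_lasso_explain(X_nbr, y_nbr)
# top_features = np.argsort(weights)[::-1][:5]
```

The nonnegativity and L1 constraints together yield a small set of features whose kernels jointly explain the GNN's local behavior, which is what makes the resulting explanation sparse and interpretable.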