Interpretability
Computer science
Graph
Artificial neural network
Machine learning
Deep neural network
Artificial intelligence
Fidelity
Scope (computer science)
Data mining
Theoretical computer science
Telecommunications
Programming language
Authors
Yuan Li,Li Liu,Guoyin Wang,Yong Du,Penggang Chen
Identifier
DOI:10.1016/j.knosys.2022.108345
Abstract
Graph neural networks (GNNs) are widely used to process graph-structured data, which makes them ubiquitous in daily life. Owing to their strong ability to extract features from structural data, GNNs have attracted increasing attention from both academia and industry. Essentially, most GNN models learn node representations by fully or randomly aggregating neighbor features. However, such crudely designed aggregation schemes lead to a lack of interpretability, limiting the adoption of GNN models. This study attempts to construct a transparent and explainable GNN model by distilling knowledge from pretrained "black-box" models. Specifically, a shallow graph neural network with explicit "contribution" weights between pairs of nodes is trained by simultaneously preserving fidelity to the behavior of the original model and optimizing the prediction loss. A neighbor selection strategy is then built upon these explicit weights to ensure both high performance and interpretability. To evaluate the proposed framework, the method is incorporated into four state-of-the-art models: GCN, GAT, GraphSAGE, and AM-GCN. Experimental results on three real-world datasets demonstrate the effectiveness of the proposed framework.
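The two ingredients the abstract describes, a distillation objective that balances fidelity to the teacher against prediction loss, and neighbor selection driven by explicit contribution weights, can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the function names, the temperature `T`, the mixing weight `alpha`, and the top-k selection rule are all assumptions for illustration.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax over the last axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * prediction loss on labels + (1 - alpha) * fidelity to the teacher.

    Fidelity is KL(teacher || student) on temperature-softened outputs;
    prediction is the usual cross-entropy on ground-truth labels.
    (alpha and T are illustrative hyperparameters, not from the paper.)
    """
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    fidelity = np.mean(
        np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    )
    p = softmax(student_logits)
    prediction = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    return alpha * prediction + (1 - alpha) * fidelity

def top_k_neighbors(contrib, adj, k=2):
    """Keep, for each node, the k neighbors with the largest contribution weight.

    contrib: (n, n) explicit per-edge "contribution" weights.
    adj:     (n, n) binary adjacency matrix.
    Returns a pruned binary adjacency matrix.
    """
    masked = np.where(adj > 0, contrib, -np.inf)  # only real edges compete
    keep = np.zeros_like(adj)
    for i in range(adj.shape[0]):
        idx = np.argsort(masked[i])[::-1][:k]     # largest contributions first
        idx = idx[masked[i, idx] > -np.inf]       # drop non-edges
        keep[i, idx] = 1
    return keep
```

With `alpha = 0` the loss reduces to the fidelity term alone, so a student that exactly matches the teacher incurs zero loss; raising `alpha` shifts weight toward the ground-truth prediction loss. The pruned adjacency from `top_k_neighbors` would then restrict aggregation to the most influential neighbors, which is what makes the explicit weights interpretable.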