Generalizability theory
Computer science
Artificial intelligence
Generalization
Machine learning
Inductive bias
Artificial neural network
Parameterized complexity
Graph
Key (lock)
Theoretical computer science
Algorithm
Multi-task learning
Mathematics
Statistics
Mathematical analysis
Computer security
Economics
Management
Task (project management)
Authors
Dongsheng Luo,Tianxiang Zhao,Wei Cheng,Dongkuan Xu,Feng Han,Wenchao Yu,Xiao Liu,Haifeng Chen,X. D. Zhang
Identifier
DOI:10.1109/tpami.2024.3362584
Abstract
Despite recent progress in Graph Neural Networks (GNNs), explaining predictions made by GNNs remains a challenging and nascent problem. Leading methods mainly consider local explanations, i.e., important subgraph structures and node features, to interpret why a GNN model makes a prediction for a single instance, e.g., a node or a graph. As a result, the generated explanation is painstakingly customized at the instance level. Interpreting each instance independently with a unique explanation is not sufficient to provide a global understanding of the learned GNN model; the resulting explainer lacks generalizability and cannot be used in the inductive setting. Besides, training an explanation model for each instance is time-consuming on large-scale real-life datasets. In this study, we address these key challenges and propose PGExplainer, a parameterized explainer for GNNs. PGExplainer adopts a deep neural network to parameterize the generation process of explanations, which makes PGExplainer a natural approach to multi-instance explanations. Compared to existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting without retraining on new instances. Thus, PGExplainer is much more efficient than leading methods, with significant speed-ups. In addition, the explanation network can also serve as a regularizer to improve the generalization power of existing GNNs when jointly trained with downstream tasks. Experiments on both synthetic and real-life datasets show highly competitive performance, with up to 24.7% relative improvement in AUC on explaining graph classification over the leading baseline.
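The core idea in the abstract, a shared neural network that parameterizes explanation generation, can be illustrated with a minimal sketch: a small MLP scores each edge from the concatenated embeddings of its endpoints, producing an edge-importance mask in one forward pass, so new instances need no per-instance optimization. This is not the authors' implementation; all dimensions, weights, and names below are hypothetical, and the node embeddings would in practice come from the trained GNN being explained.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_edge_mask(z, edges, w1, b1, w2, b2):
    """Score every edge with one shared MLP over concatenated endpoint embeddings."""
    feats = np.concatenate([z[edges[:, 0]], z[edges[:, 1]]], axis=1)  # (E, 2d)
    h = np.maximum(feats @ w1 + b1, 0.0)   # ReLU hidden layer
    logits = (h @ w2 + b2).ravel()         # one logit per edge
    return 1.0 / (1.0 + np.exp(-logits))   # sigmoid -> edge-importance probabilities

# Toy graph: 5 nodes with d=4 embeddings (stand-ins for GNN embeddings), 6 edges.
d, hidden = 4, 8
z = rng.normal(size=(5, d))
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0], [1, 3]])

# Randomly initialized explainer parameters; training them against the GNN's
# prediction (as the paper proposes) is omitted here.
w1 = rng.normal(size=(2 * d, hidden)); b1 = np.zeros(hidden)
w2 = rng.normal(size=(hidden, 1));     b2 = np.zeros(1)

mask = mlp_edge_mask(z, edges, w1, b1, w2, b2)
print(mask.shape)  # one importance score per edge: (6,)
```

Because the MLP weights are shared across all edges and all instances, the same trained explainer can score edges of unseen graphs inductively, which is the source of the speed-up the abstract claims.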