Computer science
Discriminant
Embedding
Theoretical computer science
Artificial intelligence
Graph
Robustness (evolution)
Machine learning
Biochemistry
Chemistry
Gene
Authors
Yong-Min Shin,Sun-Woo Kim,Won-Yong Shin
Identifier
DOI:10.1109/tpami.2024.3379251
Abstract
Aside from graph neural networks (GNNs) attracting significant attention as a powerful framework revolutionizing graph representation learning, there has been an increasing demand for explaining GNN models. Although various explanation methods for GNNs have been developed, most studies have focused on instance-level explanations, which produce explanations tailored to a given graph instance. In our study, we propose Prototype-bAsed GNN-Explainer (PAGE), a novel model-level GNN explanation method that explains what the underlying GNN model has learned for graph classification by discovering human-interpretable prototype graphs. Our method produces explanations for a given class, thus being capable of offering more concise and comprehensive explanations than those of instance-level explanations. First, PAGE selects embeddings of class-discriminative input graphs on the graph-level embedding space after clustering them. Then, PAGE discovers a common subgraph pattern by iteratively searching for highly matched node tuples using node-level embeddings via a prototype scoring function, thereby yielding a prototype graph as our explanation. Using six graph classification datasets, we demonstrate that PAGE qualitatively and quantitatively outperforms the state-of-the-art model-level explanation method. We also carry out systematic experimental studies by demonstrating the relationship between PAGE and instance-level explanation methods, the robustness of PAGE to environments where input data are scarce, and the computational efficiency of the proposed prototype scoring function in PAGE.
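The two stages described in the abstract — (1) clustering graph-level embeddings and selecting class-discriminative graphs, then (2) scoring node tuples across those graphs to find a common prototype pattern — can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: the tiny k-means, the "closest to centroid" selection rule, and the average-pairwise-cosine-similarity scoring are all stand-in assumptions for the actual clustering procedure and prototype scoring function used by PAGE.

```python
import numpy as np

def select_class_graphs(graph_embs, k=2, n_select=3, seed=0):
    """Stage 1 (sketch): cluster graph-level embeddings with a tiny k-means
    and return indices of the n_select graphs closest to the centroid of the
    largest cluster, as a stand-in for class-discriminative selection."""
    rng = np.random.default_rng(seed)
    centroids = graph_embs[rng.choice(len(graph_embs), size=k, replace=False)]
    for _ in range(10):  # a few Lloyd iterations suffice for a sketch
        dists = np.linalg.norm(graph_embs[:, None] - centroids[None], axis=-1)
        assign = dists.argmin(axis=1)
        for c in range(k):
            if (assign == c).any():
                centroids[c] = graph_embs[assign == c].mean(axis=0)
    main = np.bincount(assign, minlength=k).argmax()
    members = np.flatnonzero(assign == main)
    dist = np.linalg.norm(graph_embs[members] - centroids[main], axis=1)
    return members[np.argsort(dist)[:n_select]]

def prototype_score(node_tuple, node_embs):
    """Stage 2 (sketch): score a tuple of nodes (one per selected graph) by
    the mean pairwise cosine similarity of their node-level embeddings --
    a hypothetical stand-in for the paper's prototype scoring function."""
    vecs = [node_embs[g][i] for g, i in enumerate(node_tuple)]
    vecs = [v / np.linalg.norm(v) for v in vecs]
    sims = [vecs[a] @ vecs[b]
            for a in range(len(vecs)) for b in range(a + 1, len(vecs))]
    return float(np.mean(sims))
```

In this toy form, high-scoring node tuples would then be grown iteratively into a shared subgraph, which serves as the prototype-graph explanation for the class.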