Simplicity (philosophy)
Computer science
Notification
Computation
Graph
Deep neural network
Artificial neural network
Artificial intelligence
Theoretical computer science
Algorithm
Philosophy
Epistemology
Political science
Law
Authors
Guangxin Su, Hanchen Wang, Ying Zhang, Wenjie Zhang, Xuemin Lin
Identifier
DOI: 10.1016/j.knosys.2024.111649
Abstract
Graph Attention Networks (GATs) and Graph Convolutional Neural Networks (GCNs) are two state-of-the-art architectures in Graph Neural Networks (GNNs). It is well known that both models suffer from performance degradation as more GNN layers are stacked, and many works have been devoted to addressing this problem. We notice that the main research efforts in this line focus on GCN models, and their techniques do not transfer well to GAT models due to the inherent differences between the two architectures. In GAT, the attention mechanism is limited because it ignores the overwhelming propagation from certain nodes as the number of layers increases. To fully utilize the expressive power of GAT, we propose a new variant named Layer-wise Self-adaptive GAT (LSGAT), which effectively alleviates the oversmoothing issue in deep GATs and is strictly more expressive than GAT. We redesign the attention coefficient computation mechanism so that it is adaptively adjusted by layer depth and considers both immediate neighbors and non-adjacent nodes from a global view. The experimental evaluation confirms that LSGAT consistently achieves better results on node classification tasks than relevant counterparts.
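To make the idea of depth-adaptive attention concrete, below is a minimal PyTorch sketch. It is not the paper's LSGAT formulation: the class name DepthAdaptiveAttentionLayer, the uniform "global" term standing in for attention to non-adjacent nodes, and the linear depth schedule for the blending weight are illustrative assumptions. The sketch only shows the general pattern of mixing standard GAT neighbor attention with a global term whose weight depends on the layer index.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthAdaptiveAttentionLayer(nn.Module):
    """Hypothetical GAT-style layer, not the paper's actual LSGAT.

    Neighbor attention (standard GAT) is blended with a uniform term over
    all nodes; the blend weight grows with layer depth, so deeper layers
    rely more on the global view. The schedule and the uniform global term
    are assumptions made for this sketch.
    """

    def __init__(self, in_dim: int, out_dim: int, layer_idx: int, num_layers: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        # Attention vector a for logits e_ij = LeakyReLU(a^T [Wh_i || Wh_j]).
        self.a = nn.Parameter(torch.randn(2 * out_dim) * 0.1)
        # Assumed depth schedule: 0 at the first layer, up to 0.5 at the last.
        self.global_weight = 0.5 * layer_idx / max(num_layers - 1, 1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: [N, in_dim] node features; adj: [N, N] dense adjacency with self-loops.
        h = self.W(x)                                    # [N, out_dim]
        n, d = h.shape
        src = (h * self.a[:d]).sum(dim=-1)               # [N], contribution of node i
        dst = (h * self.a[d:]).sum(dim=-1)               # [N], contribution of node j
        e = F.leaky_relu(src.unsqueeze(1) + dst.unsqueeze(0), negative_slope=0.2)
        # Standard GAT coefficients: softmax restricted to adjacent nodes.
        masked = torch.where(adj > 0, e, torch.full_like(e, float("-inf")))
        local_att = torch.softmax(masked, dim=1)         # [N, N]
        # Crude "global view": uniform weight over all nodes, adjacent or not.
        global_att = torch.full_like(local_att, 1.0 / n)
        att = (1.0 - self.global_weight) * local_att + self.global_weight * global_att
        return F.elu(att @ h)


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(6, 8)                                # 6 nodes, 8 features
    adj = (torch.rand(6, 6) > 0.6).float()
    adj.fill_diagonal_(1.0)                              # add self-loops
    layers = [DepthAdaptiveAttentionLayer(8, 8, i, num_layers=4) for i in range(4)]
    for layer in layers:
        x = layer(x, adj)
    print(x.shape)                                       # torch.Size([6, 8])
```

In this sketch, early layers behave essentially like standard GAT (local softmax over neighbors), while deeper layers give increasing weight to the global term, which loosely mirrors the abstract's point that deeper layers should not let propagation from certain neighbors dominate.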