Computer science
Perspective (graphics)
Topology (circuits)
Artificial neural network
Network topology
Feature (linguistics)
Graph
Fusion
Artificial intelligence
Theoretical computer science
Mathematics
Computer network
Combinatorics
Linguistics
Philosophy
Authors
Lanning Wei, Huan Zhao, Zhiqiang He
Identifier
DOI: 10.1145/3485447.3512185
Abstract
In recent years, Graph Neural Networks (GNNs) have shown superior performance on diverse real-world applications. Beyond designing aggregation operations, GNN topology design is also very important for improving model capacity. In general, there are two mainstream GNN topology design manners. The first stacks aggregation operations to obtain higher-level features, but performance easily drops as the network goes deeper. The second utilizes multiple aggregation operations in each layer, which provides an adequate and independent feature extraction stage on local neighbors but is costly for obtaining higher-level information. To enjoy the benefits of these two manners while alleviating their deficiencies, we learn to design the topology of GNNs from a novel feature fusion perspective, dubbed F2GNN. To be specific, we provide a feature fusion perspective on designing GNN topology and propose a novel framework that unifies existing topology designs with feature selection and fusion strategies. We then develop a neural architecture search method on top of the unified framework, which contains a set of selection and fusion operations in the search space and an improved differentiable search algorithm. Performance gains on diverse datasets, five homophilous and three heterophilous ones, demonstrate the effectiveness of F2GNN. We further conduct experiments to show that F2GNN can improve model capacity while alleviating the deficiencies of existing GNN topology design manners, especially the over-smoothing problem, by utilizing different levels of features adaptively.
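The core idea in the abstract, fusing features from different propagation depths instead of only using the deepest layer, can be sketched minimally as follows. This is not the paper's implementation: `propagate` stands in for an arbitrary aggregation operation, and the per-level `weights` are a hypothetical placeholder for the selection/fusion strategy that F2GNN would search for; uniform weights are used here purely for illustration.

```python
import numpy as np

def propagate(adj, h):
    # One mean-aggregation step over neighbors
    # (a stand-in for a generic GNN aggregation operation).
    deg = adj.sum(axis=1, keepdims=True)
    return adj @ h / np.maximum(deg, 1)

def fused_features(adj, x, num_layers=3, weights=None):
    # Collect features at every depth; level 0 is the raw input,
    # level k is the result of k aggregation steps.
    levels = [x]
    h = x
    for _ in range(num_layers):
        h = propagate(adj, h)
        levels.append(h)
    # Fuse the levels with per-level weights. In F2GNN these
    # selection/fusion choices would be learned by architecture
    # search; uniform weights here are only a placeholder.
    if weights is None:
        weights = np.ones(len(levels)) / len(levels)
    return sum(w * lv for w, lv in zip(weights, levels))

# Toy 2-node graph: each node's sole neighbor is the other node.
adj = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.array([[1.0], [3.0]])
out = fused_features(adj, x)
```

Because the fusion keeps a direct path to shallow (even level-0) features, deep stacks no longer force every node representation through many smoothing steps, which is the intuition behind the claimed relief of over-smoothing.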