DOI:10.1109/tnnls.2023.3278183
Abstract
Graph neural networks (GNNs) have been successful in a variety of graph-based applications. Recently, it has been shown that capturing long-range relationships between nodes helps improve the performance of GNNs, a phenomenon confirmed mostly in supervised learning settings. In this article, inspired by contrastive learning (CL), we propose an unsupervised learning pipeline in which different types of long-range similarity information are injected into the GNN model in an efficient way. We reconstruct the original graph in the feature and topology spaces to generate three augmented views. During training, our model alternately picks an augmented view and maximizes the agreement between the representations of that view and the original graph. Importantly, we identify the issue of diminishing utility of the augmented views as the model gradually learns useful information from them. Hence, we propose a view update scheme that adaptively adjusts the augmented views so that they can continue to provide new information that helps with CL. The updated augmented views and the original graph are jointly used to train a shared GNN encoder by optimizing an efficient channel-level contrastive objective. We conduct extensive experiments on six assortative graphs and three disassortative graphs, which demonstrate the effectiveness of our method.
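The abstract mentions an efficient "channel-level contrastive objective" between the encoder outputs for an augmented view and the original graph, but gives no formula. As a minimal sketch, assuming a Barlow-Twins-style channel objective (agreement on the diagonal of a channel cross-correlation matrix, decorrelation off the diagonal — an illustrative assumption, not necessarily the paper's exact loss), with hypothetical node-embedding matrices `z1` and `z2` from a shared encoder:

```python
import numpy as np

def channel_contrastive_loss(z1, z2, lam=0.005):
    """Illustrative channel-level contrastive objective (Barlow-Twins style).

    z1, z2: (n_nodes, n_channels) embeddings of the original graph and an
    augmented view from a shared GNN encoder (hypothetical shapes/names).
    lam: weight of the off-diagonal (redundancy-reduction) term.
    """
    # Standardize each channel across nodes.
    z1 = (z1 - z1.mean(axis=0)) / (z1.std(axis=0) + 1e-8)
    z2 = (z2 - z2.mean(axis=0)) / (z2.std(axis=0) + 1e-8)
    n = z1.shape[0]
    # Cross-correlation matrix between channels of the two views.
    c = z1.T @ z2 / n
    # Invariance term: diagonal entries should be 1 (matching channels agree).
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    # Redundancy-reduction term: off-diagonal entries pushed toward 0.
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)
    return on_diag + lam * off_diag
```

Such channel-level objectives scale with the number of channels rather than the number of node pairs, which matches the abstract's emphasis on efficiency; identical embeddings for both views yield a near-zero loss, while unrelated embeddings are penalized.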