Computer science
Artificial intelligence
Training
Graph
Artificial neural network
Machine learning
Theoretical computer science
Authors
Oscar Pina,Verónica Vilaplana
Identifiers
DOI: 10.1109/TNNLS.2025.3577702
Abstract
Training graph neural networks (GNNs) on large graphs is challenging due to both the high memory and computational costs of end-to-end training and the scarcity of detailed node-level annotations. To address these challenges, we propose layer-wise regularized graph infomax (LRGI), a self-supervised learning algorithm inspired by predictive coding, a biologically motivated principle in which each layer is trained locally to predict its future inputs. LRGI trains GNNs layer by layer, decoupling their memory and time complexity from the network depth, thereby enabling scalable training on large graphs. In LRGI, each layer learns to predict the features propagated from its neighbors, allowing independent training of each layer. This approach, combined with regularization that promotes diverse representations, also helps mitigate oversmoothing in deep GNNs. Experiments on large inductive graph benchmarks demonstrate that LRGI achieves competitive performance compared to state-of-the-art end-to-end methods, while substantially improving efficiency.
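To make the layer-wise idea concrete, below is a minimal PyTorch sketch of greedy, layer-by-layer self-supervised training: each layer is optimized with a purely local loss that predicts the features propagated from its neighbors, plus a variance regularizer that discourages collapsed representations. This is an illustrative assumption-laden stand-in, not the authors' released LRGI code; the names LocalLayer, local_loss, and train_layerwise, and the exact form of the loss, are hypothetical.

# Illustrative sketch only: layer-wise self-supervised GNN training in plain
# PyTorch. LocalLayer, local_loss and train_layerwise are hypothetical names,
# and the loss is a simplified stand-in for the paper's objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalLayer(nn.Module):
    """One GCN-style layer trained with a purely local objective."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # Aggregate normalized neighbor features, then transform.
        return F.relu(self.lin(adj_norm @ x))

def local_loss(z, adj_norm, lam=1.0):
    # Predictive term: each node's embedding should match the embedding
    # propagated from its neighbors (targets detached, so the layer learns
    # to predict its own propagated inputs).
    target = (adj_norm @ z).detach()
    pred = F.mse_loss(z, target)
    # Variance regularizer: keep per-dimension spread above 1 so the
    # representations stay diverse instead of collapsing (oversmoothing).
    var_reg = F.relu(1.0 - z.std(dim=0)).mean()
    return pred + lam * var_reg

def train_layerwise(x, adj_norm, dims, epochs=100, lr=1e-2):
    # Greedy scheme: optimize one layer at a time, freeze it, and feed its
    # output forward, so per-step memory is independent of network depth.
    layers, h = [], x
    for out_dim in dims:
        layer = LocalLayer(h.shape[1], out_dim)
        opt = torch.optim.Adam(layer.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = local_loss(layer(h, adj_norm), adj_norm)
            loss.backward()
            opt.step()
        layers.append(layer.eval())
        with torch.no_grad():          # detach so the next layer trains alone
            h = layer(h, adj_norm)
    return layers, h

if __name__ == "__main__":
    torch.manual_seed(0)
    n, d = 50, 16
    x = torch.randn(n, d)
    a = ((torch.rand(n, n) < 0.1).float() + torch.eye(n))
    a = ((a + a.t()) > 0).float()                 # symmetric with self-loops
    d_inv_sqrt = a.sum(1).pow(-0.5)
    adj_norm = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
    _, emb = train_layerwise(x, adj_norm, dims=[32, 32])
    print(emb.shape)  # torch.Size([50, 32])

Because each layer's targets come from its own propagated inputs, no node labels are required, matching the self-supervised framing; a practical implementation for large graphs would use sparse message passing (e.g., a library such as PyTorch Geometric) and mini-batching rather than the dense adjacency used here for brevity.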