Keywords: computer science; machine learning; artificial intelligence; theoretical computer science; parallel computing; recommender systems; graph; message passing; edge; constraint; convergence; benchmark; speedup
Authors
Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 29
Identifier
DOI: 10.48550/arxiv.2110.15114
Abstract
With the recent success of graph convolutional networks (GCNs), they have been widely applied to recommendation and have achieved impressive performance gains. The core of GCNs lies in their message passing mechanism for aggregating neighborhood information. However, we observed that message passing largely slows down the convergence of GCNs during training, especially for large-scale recommender systems, which hinders their wide adoption. LightGCN makes an early attempt to simplify GCNs for collaborative filtering by omitting feature transformations and nonlinear activations. In this paper, we take one step further and propose an ultra-simplified formulation of GCNs (dubbed UltraGCN), which skips infinite layers of message passing for efficient recommendation. Instead of explicit message passing, UltraGCN resorts to directly approximating the limit of infinite-layer graph convolutions via a constraint loss. Meanwhile, UltraGCN allows for more appropriate edge weight assignments and flexible adjustment of the relative importance of different types of relationships. This finally yields a simple yet effective UltraGCN model, which is easy to implement and efficient to train. Experimental results on four benchmark datasets show that UltraGCN not only outperforms state-of-the-art GCN models but also achieves more than 10x speedup over LightGCN. Our source code will be available at https://reczoo.github.io/UltraGCN.
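The core idea described in the abstract, replacing stacked message-passing layers with a constraint loss that approximates the infinite-layer limit of graph convolution, can be sketched compactly. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the degree-based edge weight `beta`, the negative-sampling setup, and the balancing coefficient `lambda_c` are choices made for the example rather than values taken from the paper.

```python
# Minimal sketch of the UltraGCN idea from the abstract (illustrative only):
# no explicit neighborhood aggregation; observed user-item edges are instead
# re-weighted by a degree-based coefficient inside an extra constraint loss.
import torch
import torch.nn.functional as F


class UltraGCNSketch(torch.nn.Module):
    def __init__(self, num_users, num_items, dim, lambda_c=1.0):
        super().__init__()
        self.user_emb = torch.nn.Embedding(num_users, dim)
        self.item_emb = torch.nn.Embedding(num_items, dim)
        self.lambda_c = lambda_c  # weight of the constraint loss (assumed value)

    def forward(self, users, pos_items, neg_items, user_deg, item_deg):
        u = self.user_emb(users)
        pi = self.item_emb(pos_items)
        ni = self.item_emb(neg_items)

        # Plain BCE ranking loss over observed (positive) and sampled (negative) edges.
        pos_scores = (u * pi).sum(-1)
        neg_scores = (u * ni).sum(-1)
        loss_main = (
            F.binary_cross_entropy_with_logits(pos_scores, torch.ones_like(pos_scores))
            + F.binary_cross_entropy_with_logits(neg_scores, torch.zeros_like(neg_scores))
        )

        # Constraint loss: a degree-based weight on positive edges stands in for
        # explicit message passing. The exact form of beta is an assumption here.
        beta = (1.0 / user_deg) * torch.sqrt((user_deg + 1.0) / (item_deg + 1.0))
        loss_constraint = -(beta * F.logsigmoid(pos_scores)).sum()

        return loss_main + self.lambda_c * loss_constraint


if __name__ == "__main__":
    # Hypothetical sizes and random batches, just to show the call pattern.
    model = UltraGCNSketch(num_users=1000, num_items=2000, dim=64)
    users = torch.randint(0, 1000, (256,))
    pos = torch.randint(0, 2000, (256,))
    neg = torch.randint(0, 2000, (256,))
    u_deg = torch.randint(1, 50, (256,)).float()
    i_deg = torch.randint(1, 50, (256,)).float()
    loss = model(users, pos, neg, u_deg, i_deg)
    loss.backward()
```

Because each training step touches only the sampled edges and their degrees, rather than propagating embeddings over the whole graph, a model of this shape trains with the per-step cost of a matrix-factorization baseline, which is consistent with the speedup over LightGCN reported in the abstract.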