Computer science
Message passing
Speedup
Graph
Convergence
Recommender system
Benchmark (computing)
Constraint
Theoretical computer science
Edge (graph theory)
Simplicity
Artificial intelligence
Distributed computing
Parallel computing
Machine learning
Mathematics
Authors
Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He
Identifier
DOI: 10.1145/3459637.3482291
Abstract
With their recent success, graph convolutional networks (GCNs) have been widely applied to recommendation and have achieved impressive performance gains. The core of GCNs lies in their message passing mechanism, which aggregates neighborhood information. However, we observed that message passing largely slows down the convergence of GCNs during training, especially for large-scale recommender systems, which hinders their wide adoption. LightGCN makes an early attempt to simplify GCNs for collaborative filtering by omitting feature transformations and nonlinear activations. In this paper, we take one step further and propose an ultra-simplified formulation of GCNs (dubbed UltraGCN), which skips infinite layers of message passing for efficient recommendation. Instead of explicit message passing, UltraGCN directly approximates the limit of infinite-layer graph convolution via a constraint loss. Meanwhile, UltraGCN allows for more appropriate edge weight assignments and flexible adjustment of the relative importance of different types of relationships. This finally yields a simple yet effective UltraGCN model, which is easy to implement and efficient to train. Experimental results on four benchmark datasets show that UltraGCN not only outperforms state-of-the-art GCN models but also achieves a more than 10x speedup over LightGCN.
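The constraint-loss idea described above can be sketched as follows. This is a minimal illustration based on the abstract's description, not the authors' released code: user-item pairs are scored by embedding dot products, and each positive edge's binary cross-entropy term is scaled by a degree-dependent coefficient `edge_weight` (following the published UltraGCN formulation, assumed here as `(1/d_u) * sqrt((d_u+1)/(d_i+1))`), which replaces explicit neighborhood aggregation. The function and variable names are illustrative.

```python
import numpy as np

def edge_weight(d_u, d_i):
    # Per-edge coefficient for a positive (user, item) pair,
    # derived from the closed-form limit of infinite-layer graph
    # convolution (assumed formula; see the UltraGCN paper):
    # beta_{u,i} = (1/d_u) * sqrt((d_u + 1) / (d_i + 1))
    return (1.0 / d_u) * np.sqrt((d_u + 1.0) / (d_i + 1.0))

def constraint_loss(U, V, pos, neg, d_user, d_item):
    """Degree-weighted BCE over positive edges plus sampled negatives.

    U, V    : user / item embedding matrices (num_users x k, num_items x k)
    pos/neg : lists of (u, i) index pairs (observed edges / sampled non-edges)
    d_user, d_item : node degrees in the user-item interaction graph
    """
    def bce_pos(s):  # -log(sigmoid(s)), numerically stable
        return np.logaddexp(0.0, -s)

    def bce_neg(s):  # -log(1 - sigmoid(s)), numerically stable
        return np.logaddexp(0.0, s)

    loss = 0.0
    for u, i in pos:
        s = U[u] @ V[i]                              # dot-product score
        loss += edge_weight(d_user[u], d_item[i]) * bce_pos(s)
    for u, i in neg:
        s = U[u] @ V[i]
        loss += bce_neg(s)                           # push non-edges apart
    return loss
```

In practice the embeddings are the only trainable parameters and this loss is minimized with standard mini-batch SGD, which is why training avoids the per-layer neighborhood aggregation that slows down conventional GCNs.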