Computer science
Recommender systems
Bipartite graph
Graph
Theoretical computer science
Popularity
Embedding
Artificial intelligence
Pipeline (software)
Machine learning
Natural language processing
Psychology
Social psychology
Programming language
Authors
Junliang Yu, Hongzhi Yin, Xinhui Xia, Tong Chen, Lizhen Cui, Quoc Viet Hung Nguyen
Source
Journal: Cornell University - arXiv
Date: 2021-12-16
Cited by: 1
Identifier
DOI: 10.48550/arxiv.2112.08679
Abstract
Contrastive learning (CL) has recently spurred a fruitful line of research in the field of recommendation, since its ability to extract self-supervised signals from raw data is well-aligned with recommender systems' need to tackle the data sparsity issue. A typical pipeline of CL-based recommendation models first augments the user-item bipartite graph with structure perturbations, and then maximizes the node representation consistency between different graph augmentations. Although this paradigm turns out to be effective, what underlies the performance gains is still a mystery. In this paper, we first experimentally disclose that, in CL-based recommendation models, CL operates by learning more evenly distributed user/item representations that can implicitly mitigate the popularity bias. Meanwhile, we reveal that the graph augmentations, which were considered necessary, play only a trivial role. Based on this finding, we propose a simple CL method which discards the graph augmentations and instead adds uniform noise to the embedding space to create contrastive views. A comprehensive experimental study on three benchmark datasets demonstrates that, though it appears strikingly simple, the proposed method can smoothly adjust the uniformity of learned representations and has distinct advantages over its graph augmentation-based counterparts in terms of recommendation accuracy and training efficiency. The code is released at https://github.com/Coder-Yu/QRec.
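The core idea in the abstract — replacing graph augmentations with uniform noise added directly to embeddings, then contrasting the two perturbed views — can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' released implementation: the exact noise construction (here, row-normalized uniform noise sign-aligned with the embedding, scaled by a hypothetical magnitude `eps`) and the InfoNCE temperature `tau` are assumptions for demonstration.

```python
import numpy as np

def perturb(emb, eps=0.1, rng=None):
    """Create a contrastive view by adding small uniform noise to each
    embedding row (assumed scheme: unit-norm noise, sign-aligned with
    the embedding, scaled by eps)."""
    rng = rng or np.random.default_rng()
    noise = rng.uniform(size=emb.shape)
    noise /= np.linalg.norm(noise, axis=1, keepdims=True)  # unit norm per row
    return emb + eps * np.sign(emb) * noise

def info_nce(z1, z2, tau=0.2):
    """InfoNCE loss between two views: matching rows are positive pairs,
    all other rows in the batch are negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                     # -log p(positive)

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 4))          # toy user/item embeddings
v1, v2 = perturb(emb, rng=rng), perturb(emb, rng=rng)
loss = info_nce(v1, v2)
```

Because the noise is row-normalized before scaling, each view sits at a fixed distance `eps` from the original embedding; minimizing the InfoNCE term then pushes representations toward the more uniform distribution the paper identifies as the real source of the gains.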