Computer science
Pairwise comparison
Graph
Embedding
Exploitation
Machine learning
Recommender system
Training set
Artificial intelligence
Data mining
Convolutional neural network
Theoretical computer science
Computer security
Identifier
DOI:10.1016/j.knosys.2023.110727
Abstract
Graph Convolutional Networks (GCNs) have achieved success in recommendation, but still face the major challenges of data sparsity and negative sampling in implicit-feedback-based recommendation. In particular, existing methods ignore the unique graph structure used for information propagation and thus fail to fully exploit the potential of GCNs. To tackle these problems, we propose a GCN-based self-training approach, called STL, which exploits both the learning results produced during training and the potential relations in the GCN embedding space. First, to handle data sparsity, we modify the interaction graph structure by adding edges that link a selected subset of users to their potential positive items. Second, we adaptively expand the set of positive samples used in the pairwise loss function, which not only supplements the dataset but also avoids sampling noise. Further, the similarity of structural neighbors on the graph is used to mine hard negative samples, improving sample quality. Experiments on three representative GCN-based recommenders and four widely used public datasets show that STL alleviates the data sparsity problem and thereby improves recommendation performance compared to normal training.
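The abstract gives no implementation details, so the following is an illustrative sketch only, not the authors' STL code. It shows, in PyTorch, two of the ingredients the abstract names: a pairwise (BPR) loss over positive/negative pairs, and hard negative mining. All names here (`bpr_loss`, `mine_hard_negatives`, `num_candidates`) are hypothetical, and the embeddings are assumed to come from an already-trained GCN encoder. Note also that this sketch mines hard negatives by scoring candidates with the current embeddings, a common stand-in; STL instead uses the similarity of structural neighbors on the graph, whose details are not given in the abstract.

```python
import torch
import torch.nn.functional as F

def bpr_loss(user_emb, pos_emb, neg_emb):
    """Standard Bayesian Personalized Ranking (pairwise) loss."""
    pos_scores = (user_emb * pos_emb).sum(dim=-1)
    neg_scores = (user_emb * neg_emb).sum(dim=-1)
    return -F.logsigmoid(pos_scores - neg_scores).mean()

def mine_hard_negatives(user_emb, item_emb, interacted_mask, num_candidates=64):
    """For each user, sample a random candidate pool and keep the
    non-interacted item the model currently scores highest (a hard negative).
    interacted_mask: bool tensor of shape (num_users, num_items)."""
    num_users, _ = user_emb.shape
    num_items = item_emb.shape[0]
    cand = torch.randint(0, num_items, (num_users, num_candidates))
    # scores[u, c] = <user u, candidate item c>
    scores = torch.einsum('ud,ucd->uc', user_emb, item_emb[cand])
    # exclude observed positives so they cannot be picked as negatives
    already = interacted_mask.gather(1, cand)
    scores = scores.masked_fill(already, float('-inf'))
    hard = cand.gather(1, scores.argmax(dim=1, keepdim=True)).squeeze(1)
    return hard

# Toy usage: in practice the embeddings would come from the GCN recommender.
torch.manual_seed(0)
U, I, D = 8, 100, 16
user_emb = torch.randn(U, D)
item_emb = torch.randn(I, D)
interacted = torch.zeros(U, I, dtype=torch.bool)
interacted[torch.arange(U), torch.randint(0, I, (U,))] = True

hard_neg = mine_hard_negatives(user_emb, item_emb, interacted)
pos_items = interacted.float().argmax(dim=1)  # one observed positive per user
loss = bpr_loss(user_emb, item_emb[pos_items], item_emb[hard_neg])
print(loss.item())
```

Replacing the score-based selection above with a graph-based criterion (e.g., preferring candidates that share many structural neighbors with the user's positives) would bring the sketch closer to the neighbor-similarity mining the abstract describes.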