Computer science
Recommender system
Counterintuitive
Machine learning
Generative model
Graph
Artificial intelligence
Information retrieval
Generative grammar
Theoretical computer science
Epistemology
Philosophy
Authors
Thanh Toan Nguyen, Nguyen Duc Khang Quach, Thành Tâm Nguyên, Thanh Trung Huynh, Viet Hung Vu, Phi Le Nguyen, Jun Jo, Quoc Viet Hung Nguyen
Abstract
With recent advancements in graph neural networks (GNNs), GNN-based recommender systems (gRSs) have achieved remarkable success in the past few years. Despite this success, existing research reveals that gRSs are still vulnerable to poison attacks, in which attackers inject fake data to manipulate recommendation results as they desire. This may be because existing poison attacks (and countermeasures) are either model-agnostic or specifically designed for traditional recommender algorithms (e.g., neighborhood-based, matrix-factorization-based, or deep-learning-based RSs) rather than gRSs. As gRSs are widely adopted in industry, understanding how to design poison attacks against them has become essential to ensuring a robust user experience. Herein, we focus on the use of poison attacks to manipulate item promotion in gRSs. Compared to standard GNNs, attacking gRSs is more challenging due to the heterogeneity of the network structure and the entanglement between users and items. To overcome these challenges, we propose GSPAttack, a generative surrogate-based poison attack framework for gRSs. GSPAttack tailors a learning process that surrogates the recommendation model and generates fake users and user-item interactions, while preserving the data correlation between users and items for recommendation accuracy. Although maintaining high accuracy on items other than the target item seems counterintuitive, it is equally crucial to the success of a poison attack. Extensive evaluations on four real-world datasets revealed that GSPAttack outperforms all baselines while retaining competitive recommendation performance, and is resistant to various countermeasures.
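The abstract gives no implementation details, but the following minimal sketch illustrates the general shape of a surrogate-based promotion attack as the abstract describes it: jointly fit a differentiable surrogate recommender to the (poisoned) data while optimizing fake user profiles so the target item's predicted scores rise for real users. Everything here is an assumption for illustration: the matrix-factorization surrogate (standing in for the GNN-based model), the relaxation of the bilevel attack problem to joint optimization, and all sizes, names, and weights (`n_fake`, `target_item`, the 0.1 trade-off), none of which come from the paper.

```python
# Hypothetical sketch (not the paper's code): a surrogate-based promotion
# attack in the spirit of GSPAttack. A plain matrix-factorization surrogate
# stands in for the GNN-based recommender, and the bilevel attack problem
# is relaxed to a joint optimization for brevity.
import torch
import torch.nn.functional as F

n_users, n_items, n_fake, dim = 500, 200, 25, 32   # assumed sizes
target_item = 7                                    # item to promote

# Observed (real) user-item interactions; random here for illustration.
real = (torch.rand(n_users, n_items) < 0.05).float()

# Fake users' profiles stay continuous while optimizing and are
# binarized only when injected into the victim's training data.
fake_logits = torch.zeros(n_fake, n_items, requires_grad=True)

U = torch.randn(n_users + n_fake, dim, requires_grad=True)  # user factors
V = torch.randn(n_items, dim, requires_grad=True)           # item factors
opt = torch.optim.Adam([U, V, fake_logits], lr=0.01)

for step in range(300):
    data = torch.cat([real, torch.sigmoid(fake_logits)])  # poisoned matrix
    scores = U @ V.T                                      # surrogate output
    # Reconstruction term: keeps the surrogate faithful to the poisoned
    # data, which is what preserves accuracy on non-target items.
    rec = -(data * F.logsigmoid(scores)
            + (1 - data) * F.logsigmoid(-scores)).mean()
    # Promotion term: raise the target item's score for *real* users.
    promo = -scores[:n_users, target_item].mean()
    loss = rec + 0.1 * promo        # 0.1 is an assumed trade-off weight
    opt.zero_grad()
    loss.backward()
    opt.step()

# Binarized fake profiles, ready to inject into the victim gRS's data.
poisoned_profiles = (torch.sigmoid(fake_logits) > 0.5).float()
```

In GSPAttack proper, the fake users and interactions come from a learned generative model and the surrogate mimics the target gRS; this sketch only shows why a differentiable surrogate makes the promotion objective optimizable by gradient descent, and why the reconstruction term keeps recommendations for non-target items intact.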