Computer science
Modality
Recommender system
Feature learning
Embedding
Representation
Machine learning
Artificial intelligence
Graph
Graph embedding
Transformer
Information retrieval
Theoretical computer science
Engineering
Authors
Hao Wu, Jiajie Wang, Zhonglin Zu
Identifier
DOI: 10.1109/icassp49357.2023.10095080
Abstract
Personalized recommender systems have attracted significant attention from both industry and academia. Recent studies have explored incorporating multi-modal side information into recommender systems to further boost performance. Meanwhile, transformer-based multi-modal representation learning has shown substantial gains on downstream visual and textual tasks. However, these self-supervised pre-training methods are not tailored for recommendation and may yield suboptimal representations. To this end, we propose Interaction-Assisted Multi-Modal Representation Learning for Recommendation (IRL), which injects information from user interactions into item multi-modal representation learning. Specifically, we extract item graph embeddings from user-item interactions and then use them to formulate a novel triplet IRL training objective, which serves as a behavior-aware pre-training task for the representation learning model. Extensive experiments on several real-world datasets demonstrate the effectiveness of IRL.
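The abstract does not specify the exact form of the triplet objective. Below is a minimal sketch of how a behavior-aware triplet loss of this kind could look, assuming the item graph embedding serves as the anchor, the item's own multi-modal representation as the positive, and another item's representation as the negative. The function name `irl_triplet_loss`, the cosine-distance metric, the in-batch negative sampling, and the margin value are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def irl_triplet_loss(graph_emb: torch.Tensor, mm_emb: torch.Tensor, margin: float = 0.2) -> torch.Tensor:
    """Sketch of a behavior-aware triplet pre-training objective (hypothetical).

    graph_emb: (B, d) item graph embeddings extracted from user-item interactions (anchors).
    mm_emb:    (B, d) multi-modal item representations from the pre-trained encoder.
    For each item, the positive is its own multi-modal embedding; negatives are
    drawn in-batch by shifting the batch by one position (assumed sampling scheme).
    """
    anchor = F.normalize(graph_emb, dim=-1)
    positive = F.normalize(mm_emb, dim=-1)
    negative = positive.roll(shifts=1, dims=0)      # in-batch negatives
    pos_dist = 1.0 - (anchor * positive).sum(-1)    # cosine distance to the matching item
    neg_dist = 1.0 - (anchor * negative).sum(-1)    # cosine distance to a mismatched item
    # Hinge: pull the multi-modal view toward its interaction-derived anchor,
    # push mismatched items at least `margin` farther away.
    return F.relu(pos_dist - neg_dist + margin).mean()

# Usage sketch with random stand-in embeddings:
loss = irl_triplet_loss(torch.randn(32, 128), torch.randn(32, 128))
```

Pulling the multi-modal representation toward the interaction-derived graph embedding is one plausible way to make the pre-training "behavior-aware" as the abstract describes; the actual negative-sampling strategy and distance function used in the paper may differ.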