Computer Science
Fusion
Graphics
Artificial Intelligence
Information Retrieval
Theoretical Computer Science
Linguistics
Philosophy
Authors
Kai Li, Long Xu, Cheng Zhu, Kunlun Zhang
Source
Journal: Mathematics
[Multidisciplinary Digital Publishing Institute]
Date: 2024-07-28
Volume/Issue: 12 (15): 2353-2353
Citations: 2
Abstract
Research on recommendation methods using multimodal graph information presents a significant challenge within the realm of information services. Prior studies in this area have lacked precision in the purification and denoising of multimodal information and have insufficiently explored fusion methods. We introduce a multimodal graph recommendation approach leveraging cross-attention fusion. This model enhances and purifies multimodal information by embedding the IDs of items and their corresponding interactive users, thereby optimizing the utilization of such information. To enable better integration, we propose a cross-attention-based multimodal information fusion method, which effectively processes and merges both the shared and the modality-specific information across modalities. Experimental results on three public datasets show that our model performs strongly, demonstrating its efficacy in leveraging multimodal information.
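The abstract describes fusing item features from different modalities (e.g. visual and textual) via cross-attention, where each modality's features act as queries over the other modality's features before the results are merged. The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of generic bidirectional cross-attention fusion; the feature matrices, dimensions, and the averaging step are illustrative assumptions, not the authors' design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    # queries come from one modality; keys/values from the other,
    # so each item's representation is re-expressed in terms of
    # the other modality's feature space
    scores = queries @ keys_values.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # rows sum to 1
    return weights @ keys_values

rng = np.random.default_rng(0)
n_items, d = 4, 8
visual = rng.standard_normal((n_items, d))   # hypothetical image-derived item features
textual = rng.standard_normal((n_items, d))  # hypothetical text-derived item features

# bidirectional fusion: each modality attends over the other,
# then the two views are averaged (one simple merge choice)
fused = 0.5 * (cross_attention(textual, visual, d)
               + cross_attention(visual, textual, d))
print(fused.shape)  # one fused d-dimensional vector per item
```

In a real recommender the fused item representations would then be combined with the ID embeddings the abstract mentions and scored against user embeddings; that part is omitted here.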