Computer science
Reinforcement learning
Interpretability
Modal
Inference
Graph
Artificial intelligence
Path (computing)
Focus (optics)
Action (physics)
Feature learning
Machine learning
Theoretical computer science
Optics
Physics
Chemistry
Polymer chemistry
Programming language
Quantum mechanics
Authors
Shaohua Tao,Runhe Qiu,Yuan Ping,Hui Ma
Identifier
DOI:10.1016/j.knosys.2021.107217
Abstract
Knowledge graphs (KGs) can provide rich, structured information for recommendation systems, improving accuracy and enabling explicit reasoning. Deep reinforcement learning (RL) has also sparked great interest in personalized recommendation. Combining the two holds promise for carrying out interpretable causal inference and improving the performance of graph-structured recommendation. However, most KG-based recommendation methods focus on the rich semantic relationships between entities in a heterogeneous knowledge graph and thus fail to make full use of the image information associated with each entity. To address these issues, we propose a novel Multi-modal Knowledge-aware Reinforcement Learning Network (MKRLN), which couples recommendation and interpretability by providing actual paths in a multi-modal KG (MKG). MKRLN generates path representations by composing the structural and visual information of entities, and infers the underlying rationale of agent-MKG interactions by leveraging the sequential dependencies within paths from the MKG. In addition, because KGs contain many attributes and entities, combining them with RL yields very large action and state spaces, which complicates the search over actions. To solve this problem, we propose a new hierarchical attention path, which focuses the agent on the items a user is interested in. This prunes the relations and entities in the KG, which in turn reduces the RL action and state spaces, shortens the path to the target entity, and improves recommendation accuracy. Our model provides explicit explanations grounded in both knowledge and images. Finally, we extensively evaluated our model on several large-scale real-world benchmark datasets, and it yielded favorable results compared with state-of-the-art methods.
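The abstract names two mechanisms: a path representation that fuses each entity's structural KG embedding with its visual (image) embedding, and a hierarchical attention that prunes the RL action space to the relations and entities a given user cares about. The PyTorch sketch below is a minimal, hypothetical reading of those two ideas, not the authors' released code: the module names (MultiModalEntityEncoder, HierarchicalActionAttention), the gated fusion, the embedding dimensions, and the top-k pruning are all illustrative assumptions.

    # Illustrative sketch only; names, dimensions, and fusion/attention
    # choices are assumptions, not the paper's implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiModalEntityEncoder(nn.Module):
        """Fuse a structural KG embedding with a visual (image) embedding."""
        def __init__(self, struct_dim=100, visual_dim=2048, out_dim=128):
            super().__init__()
            self.struct_proj = nn.Linear(struct_dim, out_dim)
            self.visual_proj = nn.Linear(visual_dim, out_dim)  # e.g. CNN features
            self.gate = nn.Linear(2 * out_dim, out_dim)

        def forward(self, struct_emb, visual_emb):
            s = self.struct_proj(struct_emb)
            v = self.visual_proj(visual_emb)
            # Learned gate decides, per dimension, how much of each
            # modality enters the fused entity representation.
            g = torch.sigmoid(self.gate(torch.cat([s, v], dim=-1)))
            return g * s + (1 - g) * v

    class HierarchicalActionAttention(nn.Module):
        """Score candidate (relation, entity) actions against the user
        embedding, keeping only the top-k so the RL action space stays small."""
        def __init__(self, dim=128, top_k=32):
            super().__init__()
            self.top_k = top_k
            self.rel_att = nn.Linear(2 * dim, 1)  # user-vs-relation attention
            self.ent_att = nn.Linear(2 * dim, 1)  # user-vs-entity attention

        def forward(self, user, rel_embs, ent_embs):
            # user: (dim,); rel_embs, ent_embs: (num_actions, dim)
            u = user.expand_as(rel_embs)
            rel_score = self.rel_att(torch.cat([u, rel_embs], dim=-1)).squeeze(-1)
            ent_score = self.ent_att(torch.cat([u, ent_embs], dim=-1)).squeeze(-1)
            score = rel_score + ent_score
            k = min(self.top_k, score.size(0))
            weight, index = torch.topk(F.softmax(score, dim=0), k)
            return weight, index  # pruned, re-weighted action set

    if __name__ == "__main__":
        enc = MultiModalEntityEncoder()
        att = HierarchicalActionAttention()
        ents = enc(torch.randn(50, 100), torch.randn(50, 2048))  # 50 candidates
        rels = torch.randn(50, 128)
        user = torch.randn(128)
        w, idx = att(user, rels, ents)
        print(w.shape, idx.shape)  # torch.Size([32]) torch.Size([32])

Under this reading, the pruned (weight, index) pairs would define the agent's candidate next hops at each step along a path, so the fused multi-modal entity states and the reduced action set feed the same policy network; the retained hops then form the explicit path that explains the recommendation.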