Computer science
Recommender system
Transparency (behavior)
Function (biology)
Sample (material)
Focus (optics)
Field (mathematics)
Ground truth
Machine learning
Artificial intelligence
Data science
Computer security
Chemistry
Physics
Mathematics
Chromatography
Evolutionary biology
Pure mathematics
Optics
Biology
Authors
Jingsen Zhang, Xiaohe Bo, Chenxi Wang, Quanyu Dai, Zhenhua Dong, Ruiming Tang, Xu Chen
Identifier
DOI:10.1109/icassp48485.2024.10446052
Abstract
Explainable recommendation has gained significant attention due to its potential to enhance user trust and system transparency. Previous studies primarily focus on refining model architectures to generate more informative explanations, assuming that the explanation data is sufficient and easy to acquire. However, in practice, obtaining the ground truth for explanations can be costly, since individuals may not be inclined to put in the additional effort to provide behavior explanations. In this paper, we study a novel problem in the field of explainable recommendation, namely: "given a limited budget to incentivize users to provide behavior explanations, how can data be collected effectively such that the downstream models can be better optimized?" To solve this problem, we propose an active learning framework for recommender systems, which consists of an acquisition function for sample collection and an explainable recommendation model that provides the final results. We consider both uncertainty-based and influence-based strategies to design the acquisition function, which determine sample effectiveness from complementary perspectives. To demonstrate the effectiveness of our framework, we conduct extensive experiments on real-world datasets.
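To make the acquisition-function idea concrete, the following is a minimal sketch of the uncertainty-based strategy only, not the authors' actual implementation: unlabeled samples are ranked by the predictive entropy of the current model's output distribution, and the top samples within the labeling budget are selected. The function names and toy predictions are illustrative assumptions.

```python
import math

def predictive_entropy(probs):
    """Entropy of a predicted probability distribution (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_samples(candidates, budget):
    """Rank unlabeled samples by uncertainty and keep the top `budget` of them.

    `candidates` maps a sample id to the model's predicted class distribution.
    """
    scored = sorted(candidates.items(),
                    key=lambda kv: predictive_entropy(kv[1]),
                    reverse=True)
    return [sample_id for sample_id, _ in scored[:budget]]

# Toy predictions: sample "b" has a near-uniform distribution, so it is
# the most uncertain and is selected first.
preds = {
    "a": [0.9, 0.05, 0.05],
    "b": [0.34, 0.33, 0.33],
    "c": [0.6, 0.3, 0.1],
}
print(select_samples(preds, budget=2))  # → ['b', 'c']
```

An influence-based strategy, as the abstract notes, would complement this by scoring how much each candidate label is expected to change the downstream model, rather than how unsure the model currently is.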