Counterfactual Thinking
Fields: Computer Science, Psychology, Mathematics, Social Psychology, Mathematical Analysis
Authors
Yi Yu, Kazunari Sugiyama, Adam Jatowt
Abstract
Providing explanations for recommendation decisions is crucial for enhancing user trust and satisfaction in recommender systems. However, existing generative methods often produce generic, repetitive explanation texts that fail to reflect the true reasons behind user interests and item attributes. It is therefore important to address this degeneration issue in recommendation explanations. This work tackles a key problem in explainable recommendation: understanding how explanation degeneration arises and improving explanation quality by mitigating it. We argue that examining the causal mechanism underlying the data generation process is key to addressing this problem. Along this line, we identify a neglected hidden variable, which we refer to as textual attributes. Textual attributes encompass various aspects, such as text style and word frequency distributions. Just like user persona and item attributes in traditional recommender systems, textual attributes also shape the nature of explanations. Our analysis of the causal graph reveals the underlying cause of the model's degeneration. To address this issue, we propose a novel learning method called Domain for Counterfactual Reasoning (D4C). By using an auxiliary domain to generate counterfactual data and combining it with factual data, this approach helps the model focus more on the causal contributions of users and items during training. Extensive experiments on five real-world datasets from various platforms demonstrate the effectiveness of our approach.
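The core idea of pairing factual data with auxiliary-domain counterfactuals can be illustrated with a minimal sketch. None of the names below come from the paper; this is a hypothetical toy assuming D4C-style training contrasts examples that differ only in their textual attributes (e.g. style), so that the user/item signal is the part held constant across each pair.

```python
import random

def make_counterfactual(example, aux_styles, rng):
    """Return a copy of `example` whose textual attribute ("style") is
    replaced by one sampled from the auxiliary domain, leaving the
    user/item fields untouched (hypothetical illustration)."""
    cf = dict(example)
    cf["style"] = rng.choice(aux_styles)
    cf["is_counterfactual"] = True
    return cf

def build_training_pairs(factual_examples, aux_styles, seed=0):
    """Pair every factual example with one counterfactual variant, so a
    model trained on the pairs sees the same (user, item) under two
    different textual attributes."""
    rng = random.Random(seed)
    pairs = []
    for ex in factual_examples:
        fact = dict(ex, is_counterfactual=False)
        pairs.append((fact, make_counterfactual(ex, aux_styles, rng)))
    return pairs
```

A usage example: `build_training_pairs([{"user": "u1", "item": "i1", "style": "review"}], ["news", "forum"])` yields one (factual, counterfactual) pair in which only the `style` field differs, mirroring the abstract's goal of steering the model toward the causal contributions of users and items rather than surface textual regularities.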