Counterfactual thinking
Computer science
Weighting
Perspective (graphical)
Regularization (linguistics)
Counterfactual conditional
Propensity score matching
Collaborative filtering
Causal model
Psychological intervention
Causal inference
Econometrics
Machine learning
Artificial intelligence
Psychology
Mathematics
Social psychology
Statistics
Recommender system
Radiology
Psychiatry
Medicine
Authors
Pengyang Shao, Le Wu, Kun Zhang, Defu Lian, Richang Hong, Yong Li, Meng Wang
Abstract
Recently, the user-side fairness issue in Collaborative Filtering (CF) algorithms has gained considerable attention, with the argument that recommendation results should not discriminate against an individual or a sub-group of users based on sensitive attributes (e.g., gender). Researchers have proposed fairness-aware CF models that decrease statistical associations between predictions and sensitive attributes. A more natural idea is to achieve model fairness from a causal perspective. The remaining challenge is that we have no access to interventions, i.e., to the counterfactual world in which recommendations are produced after each user's sensitive attribute value has been changed. To this end, we first borrow the Rubin-Neyman potential outcome framework to define the average causal effects of sensitive attributes. Next, we show that removing the causal effects of sensitive attributes is equivalent to achieving average counterfactual fairness in CF. Then, we use the propensity re-weighting paradigm to estimate the average causal effects of sensitive attributes and formulate the estimated causal effects as an additional regularization term. To the best of our knowledge, this work is among the first attempts to achieve counterfactual fairness from the causal effect estimation perspective in CF, which frees us from building sophisticated causal graphs. Finally, experiments on three real-world datasets demonstrate the superiority of our proposed model.
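
The abstract outlines a concrete recipe: estimate the average causal effect of a binary sensitive attribute on predictions via propensity re-weighting, then penalize that estimate as a regularization term alongside the usual CF objective. Below is a minimal PyTorch sketch of that idea, assuming a binary sensitive attribute and propensity scores estimated beforehand; the class FairMF, the helper ipw_causal_effect, and the weight lam are illustrative names, not the paper's actual implementation.

import torch
import torch.nn as nn

class FairMF(nn.Module):
    # Plain matrix factorization: prediction is a dot product of
    # user and item embeddings (stand-in for any CF backbone).
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, users, items):
        return (self.user_emb(users) * self.item_emb(items)).sum(-1)

def ipw_causal_effect(preds, sens, propensity, eps=1e-6):
    # Inverse-propensity-weighted estimate of the average causal effect
    # of the sensitive attribute s on the predictions y:
    #   tau_hat = mean(s * y / e) - mean((1 - s) * y / (1 - e)),
    # where e = P(s = 1 | user) is a propensity score fitted beforehand
    # and sens is a float {0, 1} tensor aligned with preds.
    e = propensity.clamp(eps, 1.0 - eps)
    treated = (sens * preds / e).mean()
    control = ((1.0 - sens) * preds / (1.0 - e)).mean()
    return treated - control

def fairness_regularized_loss(model, users, items, ratings,
                              sens, propensity, lam=0.1):
    preds = model(users, items)
    mse = ((preds - ratings) ** 2).mean()   # standard CF fitting term
    tau_hat = ipw_causal_effect(preds, sens, propensity)
    return mse + lam * tau_hat.abs()        # estimated causal effect as penalty

Penalizing the magnitude of the estimated effect pushes it toward zero, which, per the abstract's equivalence result, corresponds to average counterfactual fairness without constructing an explicit causal graph.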