Debiasing
Computer science
Robustness (evolution)
Parametric statistics
Variance (accounting)
Imputation (statistics)
Generalization error
Generalization
Artificial intelligence
Nonparametric statistics
Machine learning
Leverage (statistics)
Algorithm
Data mining
Econometrics
Statistics
Missing data
Mathematics
Artificial neural network
Psychology
Accounting
Business
Mathematical analysis
Gene
Chemistry
Cognitive science
Biochemistry
Authors
Peng Wu, Haoxuan Li, Yan Lyu, Chunyuan Zheng, Xiao-Hua Zhou
Source
Journal: Cornell University - arXiv
Date: 2022-03-19
Citations: 14
Identifier
DOI: 10.48550/arxiv.2203.10258
Abstract
Bias is a common problem inherent in recommender systems; it is entangled with users' preferences and poses a great challenge to unbiased learning. For debiasing tasks, the doubly robust (DR) method and its variants show superior performance due to the double robustness property, that is, DR is unbiased when either the imputed errors or the learned propensities are accurate. However, our theoretical analysis reveals that DR usually has a large variance. Moreover, DR can suffer from unexpectedly large bias and poor generalization when the imputed errors and learned propensities are inaccurate, which is common in practice. In this paper, we propose a principled approach that effectively reduces bias and variance simultaneously for existing DR approaches when the error imputation model is misspecified. In addition, we propose a novel semi-parametric collaborative learning approach that decomposes the imputed errors into parametric and nonparametric parts and updates them collaboratively, yielding more accurate predictions. Both theoretical analysis and experiments demonstrate the superiority of the proposed methods over existing debiasing methods.
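For reference, below is a minimal sketch of the standard doubly robust estimator that the double-robustness property mentioned above refers to; the notation ($o_{u,i}$ as the observation indicator, $e_{u,i}$ as the prediction error, $\hat{e}_{u,i}$ as the imputed error, $\hat{p}_{u,i}$ as the learned propensity, $\mathcal{D}$ as all user-item pairs) is assumed background and is not quoted from this paper's text.
\[
\mathcal{E}_{\mathrm{DR}} \;=\; \frac{1}{|\mathcal{D}|} \sum_{(u,i) \in \mathcal{D}} \left[ \hat{e}_{u,i} \;+\; \frac{o_{u,i}\,\big(e_{u,i} - \hat{e}_{u,i}\big)}{\hat{p}_{u,i}} \right]
\]
Under this assumed form, $\mathbb{E}[\mathcal{E}_{\mathrm{DR}}]$ equals the ideal loss $\frac{1}{|\mathcal{D}|}\sum_{(u,i)\in\mathcal{D}} e_{u,i}$ whenever either $\hat{e}_{u,i} = e_{u,i}$ or $\hat{p}_{u,i} = \mathbb{E}[o_{u,i}]$ holds for all pairs, which is the double robustness property; the propensity-weighted correction term is also where small propensities can inflate the variance noted in the abstract.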