CDR: Conservative Doubly Robust Learning for Debiased Recommendation

Abstract

In recommender systems (RS), user behavior data is observational rather than experimental, resulting in widespread bias in the data. Consequently, tackling bias has emerged as a major challenge in the field. Recently, Doubly Robust Learning (DR) has gained significant attention due to its remarkable performance and robustness properties. However, our experimental findings indicate that existing DR methods are severely impacted by the presence of so-called Poisonous Imputation, where the imputation significantly deviates from the truth and becomes counterproductive. To address this issue, this work proposes a Conservative Doubly Robust strategy (CDR), which filters imputations by scrutinizing their mean and variance. Theoretical analyses show that CDR offers reduced variance and improved tail bounds. In addition, our experimental investigations illustrate that CDR significantly enhances performance and can indeed reduce the frequency of poisonous imputation.
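To make the setting concrete, the following is a minimal sketch of the standard doubly robust error estimator together with a hypothetical mean/variance-based imputation filter. It illustrates the general idea of conservatively discarding suspect imputations; the function names, the threshold rule `k` standard deviations, and all inputs are illustrative assumptions, not the paper's actual CDR algorithm.

```python
import numpy as np

def dr_estimate(o, e, e_hat, p):
    """Standard doubly robust estimate of the average prediction error.

    o     : 0/1 indicator of whether each user-item pair was observed
    e     : true error on observed pairs (arbitrary on unobserved ones)
    e_hat : imputed error for every pair
    p     : propensity (probability of observation) for every pair
    """
    # Imputed error everywhere, plus a propensity-weighted correction
    # on the observed entries.
    return np.mean(e_hat + o * (e - e_hat) / p)

def conservative_mask(e_hat, k=2.0):
    """Hypothetical conservative filter: keep only imputations within
    k standard deviations of the mean of all imputed errors."""
    mu, sigma = e_hat.mean(), e_hat.std()
    return np.abs(e_hat - mu) <= k * sigma

# Toy usage: one observed pair, one unobserved pair.
o = np.array([1.0, 0.0])
e = np.array([0.5, 0.0])        # true error only meaningful where o == 1
e_hat = np.array([0.4, 0.2])
p = np.array([0.5, 0.5])
print(dr_estimate(o, e, e_hat, p))  # -> 0.4

# An extreme imputation is flagged as potentially "poisonous".
mask = conservative_mask(np.array([0.1, 0.1, 0.1, 5.0]), k=1.0)
print(mask)  # -> [ True  True  True False]
```

In a training loop, pairs failing the mask would fall back to a safer value (e.g., zero correction) rather than contributing a poisonous imputation to the DR estimator.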

Publication
In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management
Jiawei Chen (Research Professor), Qihao Shi (Associate Research Professor), Yan Feng (Associate Professor), Chun Chen (Academician), Can Wang (Professor)