Computer science
Classifier (UML)
Harm
Noise (video)
Narrowing
Empirical research
Machine learning
Artificial intelligence
Econometrics
Algorithm
Data mining
Statistics
Mathematics
Psychology
Social psychology
Image (mathematics)
Programming language
Authors
Jialu Wang,Yang Liu,Caleb Levy
Identifier
DOI:10.1145/3442188.3445915
Abstract
This work examines how to train fair classifiers in settings where training labels are corrupted with random noise, and where the error rates of corruption depend both on the label class and on the membership function for a protected subgroup. Heterogeneous label noise models systematic biases towards particular groups when generating annotations. We begin by presenting analytical results which show that naively imposing parity constraints on demographic disparity measures, without accounting for heterogeneous and group-dependent error rates, can decrease both the accuracy and the fairness of the resulting classifier. Our experiments demonstrate these issues arise in practice as well. We address these problems by performing empirical risk minimization with carefully defined surrogate loss functions and surrogate constraints that help avoid the pitfalls introduced by heterogeneous label noise. We provide both theoretical and empirical justifications for the efficacy of our methods. We view our results as an important example of how imposing fairness on biased data sets without proper care can do at least as much harm as it does good.
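The surrogate-loss idea described in the abstract can be sketched with the standard unbiased-loss correction for class-conditional label noise. This is a minimal illustration, not the authors' exact construction: the function names and the choice of logistic loss are assumptions, and in the paper's group-dependent setting the flip rates `rho_pos` and `rho_neg` would be estimated separately for each protected group.

```python
import numpy as np

def logistic_loss(score, y):
    """Plain logistic loss for labels y in {-1, +1}."""
    return float(np.log1p(np.exp(-y * score)))

def corrected_loss(score, noisy_y, rho_pos, rho_neg):
    """Unbiased surrogate loss under class-conditional label noise.

    rho_pos = P(label flipped | true label +1)
    rho_neg = P(label flipped | true label -1)

    In a group-dependent setting, pass the rates for the example's
    protected group. Taking the expectation over the noise process
    recovers the clean loss, which is why minimizing this surrogate
    on noisy data approximates empirical risk minimization on clean data.
    """
    denom = 1.0 - rho_pos - rho_neg  # requires rho_pos + rho_neg < 1
    if noisy_y == 1:
        return ((1.0 - rho_neg) * logistic_loss(score, 1)
                - rho_pos * logistic_loss(score, -1)) / denom
    else:
        return ((1.0 - rho_pos) * logistic_loss(score, -1)
                - rho_neg * logistic_loss(score, 1)) / denom
```

Sanity check of the unbiasedness property: for a point with true label +1 and flip rate `rho_pos = 0.2`, the expected corrected loss over the noise, `0.8 * corrected_loss(s, +1, ...) + 0.2 * corrected_loss(s, -1, ...)`, equals the clean loss `logistic_loss(s, +1)`.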