
The Dark Side of Machine Learning Algorithms

Keywords: Computer Science · Algorithms · Artificial Intelligence · Machine Learning
Author
Mariya I. Vasileva
Identifier
DOI:10.1145/3394486.3411068
Abstract

Machine learning and access to big data are revolutionizing the way many industries operate, providing analytics and automation for many real-world practical tasks that were previously thought to be necessarily manual. With the pervasiveness of artificial intelligence and machine learning over the past decade, and their rapid spread across a variety of applications, algorithmic fairness has become a prominent open research problem. For instance, machine learning is used in courts to assess the probability that a defendant will recommit a crime; in the medical domain to assist with diagnosis or to predict predisposition to certain diseases; in social welfare systems; and in autonomous vehicles. The decision-making processes in these real-world applications have a direct effect on people's lives, and can cause harm to society if the deployed machine learning algorithms are not designed with fairness in mind.

The ability to collect and analyze large datasets for problems in many domains brings with it the danger of implicit data bias, which can be harmful. Data, especially big data, is often heterogeneous, generated by different subgroups with their own characteristics and behaviors. Furthermore, data collection strategies vary vastly across domains, and labelling of examples is performed by human annotators, so the labelling process can amplify inherent biases the annotators might harbor. A model learned on biased data may not only produce unfair and inaccurate predictions, but also significantly disadvantage certain subgroups and lead to unfairness in downstream learning tasks. There are multiple ways in which discriminatory bias can seep into data: for example, in medical domains, there are many instances in which the data used are skewed toward certain populations, which can have dangerous consequences for the underrepresented communities [1].
Another example is the large-scale datasets widely used in machine learning tasks, such as ImageNet and Open Images: [2] shows that these datasets suffer from representation bias, and advocates for the need to incorporate geo-diversity and inclusion. Yet another example is the popular face recognition and generation datasets, such as CelebA and Flickr-Faces-HQ, where the ethnic and racial breakdown of example faces shows significant representation bias, evident in downstream tasks like face reconstruction from an obfuscated image [8].

In order to fight discriminatory use of machine learning algorithms that leverage such biases, one first needs to define the notion of algorithmic fairness. Broadly, fairness is the absence of any prejudice or favoritism towards an individual or a group based on their intrinsic or acquired traits in the context of decision making [3]. Fairness definitions fall under three broad types: individual fairness (whereby similar predictions are given to similar individuals [4, 5]), group fairness (whereby different groups are treated equally [4, 5]), and subgroup fairness (whereby a group fairness constraint is selected, and the task is to determine whether the constraint holds over a large collection of subgroups [6, 7]).

In this talk, I will discuss a formal definition of these fairness constraints, examine the ways in which machine learning algorithms can amplify representation bias, and discuss how bias in both the example set and the label set of popular datasets has been misused in a discriminatory manner. I will touch upon the issues of ethics and accountability, and present open research directions for tackling algorithmic fairness at the representation level.
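To make the group-fairness notion above concrete, the sketch below computes a demographic-parity gap: the largest difference in positive-prediction rates between any two groups. This is one common instantiation of group fairness, not the specific formulation from the talk; the function name and toy data are illustrative only.

```python
# Minimal sketch of a group-fairness check via demographic parity:
# a classifier satisfies demographic parity when every group receives
# positive predictions at (roughly) the same rate. All names and data
# here are hypothetical, chosen only to illustrate the definition.

def demographic_parity_gap(predictions, groups):
    """Return the max difference in positive-prediction rate across groups."""
    counts = {}  # group -> (num_positive, num_total)
    for pred, group in zip(predictions, groups):
        n_pos, n_total = counts.get(group, (0, 0))
        counts[group] = (n_pos + (1 if pred == 1 else 0), n_total + 1)
    rates = [n_pos / n_total for n_pos, n_total in counts.values()]
    return max(rates) - min(rates)

# Toy example: group "a" gets positive predictions 75% of the time,
# group "b" only 25% -- the 0.5 gap flags a group-fairness violation.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A gap of 0 means perfect demographic parity; in practice one typically requires the gap to fall below a chosen tolerance, and other group-fairness criteria (e.g. equalized odds) condition the comparison on the true label as well.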
