Computer Science
Knowledge Management
Strategic Planning
Process Management
Artificial Intelligence
Machine Learning
Business
Marketing
Authors
Hajime Shimao, Warut Khern-am-nuai, Karthik Kannan, Maxime C. Cohen
Source
Journal: Information Systems Research
Publisher: Institute for Operations Research and the Management Sciences
Date: 2025-04-07
Volume/Issue: 36(4): 2391-2403
Citations: 2
Identifier
DOI: 10.1287/isre.2022.0055
Abstract
This study introduces a framework called "strategic best-response fairness" (SBR fairness) to address discrimination perpetuated by machine-learning (ML) algorithms. It challenges the conventional focus on fairness in prediction outcomes alone, arguing that this approach ignores how individuals affected by the predictions may alter their behavior in response to algorithmic decisions. The framework asks whether an algorithm, trained on potentially biased data, leads to identical equilibrium behaviors across subpopulations that are ex ante identical. The study finds that common fair-ML algorithms, such as those relying on the color-blindness and demographic-parity fairness criteria, do not always achieve SBR fairness, meaning they may fail to eliminate disparities in effort and outcomes. Equalized odds (EO), by contrast, is shown to achieve SBR fairness but suffers from several practical limitations. The study proposes that SBR fairness is a necessary condition for breaking cycles of discrimination in ML and argues that it offers a complementary lens for assessing other fairness criteria and understanding behavioral responses. The findings point to a need for policy and practice to focus on designing SBR-fair algorithms that promote equitable outcomes at both the prediction and behavioral levels.
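For reference, the fairness criteria named in the abstract have standard formal definitions in the fair-ML literature. The following is a minimal sketch in conventional notation (prediction \hat{Y}, true outcome Y, group attribute A); this notation is assumed here and is not taken from the paper itself.

% Color-blindness: the predictor simply ignores the group attribute,
% i.e., \hat{Y} = f(X) with A excluded from the feature set X.

% Demographic parity: equal positive-prediction rates across groups.
\Pr(\hat{Y} = 1 \mid A = a) = \Pr(\hat{Y} = 1 \mid A = b) \quad \forall a, b

% Equalized odds: \hat{Y} independent of A conditional on the true outcome Y,
% i.e., equal true-positive and false-positive rates across groups.
\Pr(\hat{Y} = 1 \mid Y = y, A = a) = \Pr(\hat{Y} = 1 \mid Y = y, A = b) \quad \forall y \in \{0, 1\},\ \forall a, b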