Injustice
Social injustice
Social psychology
Psychology
Computer science
Political science
Law
Politics
Authors
Hüseyin Tanriverdi, John-Patrick Olatunji Akinyemi
Identifier
DOI: 10.25300/misq/2025/18314
Abstract
A key assumption in data science is that the fairness of an algorithm depends on its accuracy. Antecedents that create accuracy problems are expected to reduce fairness and cause algorithmic social injustices. We theorize why complexities in the ground truths, IT ecosystems, and statistical models of algorithms can also generate algorithmic social injustices, above and beyond the indirect effects that antecedents exert through the mediation of accuracy problems. We also theorize technology-design and organizational mitigation mechanisms for taming such complexities and reducing algorithmic social injustices. We tested the proposed theory on a sample of 363 matched pairs of problematic and problem-free algorithms. We found that complexities in ground truths affected algorithmic social injustices directly rather than through the mediation of accuracy problems. Failures in the complex IT ecosystems of algorithms did not affect the likelihood of algorithmic social injustices, but they caused damage both directly and indirectly through the mediation of accuracy problems. Failures in complex statistical models significantly increased algorithmic social injustices both directly and indirectly through the mediation of accuracy problems. The results indicate that agentic algorithms produce social injustices not only through accuracy problems but also through complexities in their ground truths, IT ecosystems, and statistical models. The proposed complexity-taming mechanisms are effective in reducing algorithmic social injustice risks through (1) the quality of the user organization's management of the algorithm's stakeholders, (2) the design of algorithms with a broad scope of human-like interaction capabilities, (3) the developer organization's algorithmic risk mitigations, and (4) the user organization's algorithmic risk mitigations.
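The abstract's core empirical claim contrasts direct effects of complexity with indirect effects mediated by accuracy problems. As a purely illustrative sketch (the paper's estimation code and variable definitions are not given here; all variable names and data below are hypothetical and synthetic), a product-of-coefficients mediation decomposition of that structure could look like this in Python:

```python
# Illustrative only: synthetic data mimicking the abstract's mediation structure,
#   complexity -> accuracy problems (mediator) -> social injustice (outcome),
# plus a direct complexity -> injustice path. All names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 363  # matches the sample size mentioned in the abstract

complexity = rng.normal(size=n)                            # e.g., statistical-model complexity
accuracy_problems = 0.5 * complexity + rng.normal(size=n)  # mediator path a
injustice = (0.4 * complexity                              # direct path c'
             + 0.6 * accuracy_problems                     # mediator path b
             + rng.normal(size=n))

# Path a: complexity -> accuracy problems
a_model = sm.OLS(accuracy_problems, sm.add_constant(complexity)).fit()
# Paths b and c': regress the outcome on predictor and mediator jointly
X = sm.add_constant(np.column_stack([complexity, accuracy_problems]))
bc_model = sm.OLS(injustice, X).fit()

a = a_model.params[1]        # effect of complexity on the mediator
c_prime = bc_model.params[1] # direct effect of complexity on injustice
b = bc_model.params[2]       # effect of the mediator on injustice

print(f"direct effect c'        = {c_prime:.3f}")
print(f"indirect effect a * b   = {a * b:.3f}")
```

Note that in the paper's matched-pair design the outcome is binary (problematic vs. problem-free algorithm), so the actual analysis would presumably use a matched-pair logistic specification rather than OLS; the sketch only conveys the direct-versus-mediated decomposition the abstract describes.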