Transparency (behavior)
Appeal
Affect (linguistics)
Computer science
Psychology
Empirical evidence
Social psychology
Computer security
Law
Political science
Philosophy
Communication
Epistemology
Authors
Kevin Bauer,Andrej Gill
Identifier
DOI:10.1287/isre.2023.1217
Abstract
Predictive algorithmic scores can significantly impact the lives of assessed individuals by shaping decisions of organizations and institutions that affect them, for example, influencing the hiring prospects of job applicants or the release of defendants on bail. To better protect people and provide them with the opportunity to appeal their algorithmic assessments, data privacy advocates and regulators increasingly push for disclosing the scores and their use in decision-making processes to scored individuals. Although inherently important, the response of scored individuals to such algorithmic transparency is understudied. Inspired by psychological and economic theories of information processing, we aim to fill this gap. We conducted a comprehensive empirical study to explore how and why disclosing the use of algorithmic scoring processes to (involuntarily) scored individuals affects their behaviors. Our results provide strong evidence that the disclosure of fundamentally erroneous algorithmic scores evokes self-fulfilling prophecies that endogenously steer the behavior of scored individuals toward their assessment, enabling algorithms to help produce the world they predict. Our results emphasize that isolated transparency measures can have considerable side effects with noticeable implications for the development of automation bias, the occurrence of feedback loops, and the design of transparency regulations.