Reframing
Operationalization
Mindset
Trustworthiness
Empowerment
Epistemology
Psychology
Computer science
Knowledge management
Sociology
Social psychology
Artificial intelligence
Political science
Philosophy
Law
Source
Journal: Patterns [Elsevier BV]
Date: 2024-06-01
Volume/Issue: 5 (6): 100971
Citations: 24
Identifiers
DOI: 10.1016/j.patter.2024.100971
Abstract
To make explainable artificial intelligence (XAI) systems trustworthy, understanding harmful effects is important. In this paper, we address an important yet unarticulated type of negative effect in XAI. We introduce explainability pitfalls (EPs), unanticipated negative downstream effects from AI explanations manifesting even when there is no intention to manipulate users. EPs are different from dark patterns, which are intentionally deceptive practices. We articulate the concept of EPs by demarcating it from dark patterns and highlighting the challenges arising from uncertainties around pitfalls. We situate and operationalize the concept using a case study that showcases how, despite best intentions, unsuspecting negative effects, such as unwarranted trust in numerical explanations, can emerge. We propose proactive and preventative strategies to address EPs at three interconnected levels: research, design, and organizational. We discuss design and societal implications around reframing AI adoption, recalibrating stakeholder empowerment, and resisting the "move fast and break things" mindset.