Keywords
Background, Stakeholders, Knowledge management, Trustworthiness, Public trust, Cognition, Computer science, Compliance (psychology), Psychology, Public relations, Social psychology, Political science, Neuroscience
Authors
Oleksandra Vereschak, Fatemeh Alizadeh, Gilles Bailly, Baptiste Caramiaux
Identifier
DOI:10.1145/3613904.3642018
Abstract
Trust between humans and AI in the context of decision-making has acquired an important role in public policy, research, and industry. In this context, Human-AI Trust has often been tackled through the lens of cognitive science and psychology, but lacks insights from the stakeholders involved. In this paper, we conducted semi-structured interviews with 7 AI practitioners and 7 decision subjects from various decision domains. We found that 1) interviewees identified the prerequisites for the existence of trust and distinguished trust from trustworthiness, reliance, and compliance; 2) trust in AI-integrated systems is influenced more strongly by other human actors than by the system's features; 3) the role of Human-AI trust factors is stakeholder-dependent. These results provide clues for the design of Human-AI interactions in which trust plays a major role, and outline new research directions in Human-AI Trust.