Transparency (behavior)
Key (lock)
Computer science
Human-computer interaction
Quality (philosophy)
Space (punctuation)
Visualization
Computer security
Artificial intelligence
Epistemology
Operating system
Philosophy
Authors
Jie Liu, Kim Marriott, Tim Dwyer, Guido Tack
Abstract
User trust plays a key role in determining whether autonomous computer applications are relied upon, and it will be central to the acceptance of emerging AI applications such as optimisation. Two factors known to affect trust are system transparency, i.e., how well the user understands how the system works, and system performance. In the case of optimisation, however, it is difficult for the end-user to understand the underlying algorithms or to judge the quality of the solution. Through two controlled user studies, we explore whether users can better calibrate their trust in the system when: (a) they are provided feedback on the system's operation in the form of visualisations of intermediate solutions and their quality; (b) they can interactively explore the solution space by modifying the solution returned by the system. We found that showing intermediate solutions can lead to over-trust, while interactive exploration leads to more accurately calibrated trust.