Interpretability
Distrust
Computer science
Empirical research
Recommender system
Artificial intelligence
Knowledge management
Machine learning
Psychology
Mathematics
Statistics
Psychotherapist
Authors
GaoShan Wang,XiQuan Liu,ZhongGuo Wang,Xuelan Yang
Source
Journal: Proceedings of the 2020 4th International Conference on Electronic Information Technology and Computer Engineering
Date: 2020-11-06
Citations: 1
Identifier
DOI:10.1145/3443467.3443850
Abstract
The complexity and opacity of artificial intelligence recommendation systems make it difficult for ordinary users to understand how they operate, often triggering two extreme reactions: extreme distrust or excessive trust. It is therefore important to identify the factors that influence users' behavioral intentions toward AI recommendation systems and to examine their effects through empirical research. Taking technology acceptance theory as the research framework, and incorporating procedural fairness and system interpretability, this paper develops a usage-intention model for AI recommendation systems and tests it empirically with the AMOS software. The empirical results show that the interpretability of an AI recommendation system has a significant positive impact on user trust, perceived usefulness, and perceived ease of use, and that procedural fairness has a significant effect on trust and perceived usefulness. These findings deepen our understanding of how procedural fairness and interpretability affect the perceived usefulness of AI recommendation systems and user behavior, and offer guidance for the further improvement of such systems.