Empirical research
Goodwill
Psychology
Social psychology
Service (business)
Emotional intelligence
Applied psychology
Social support
Social relations
Knowledge management
China
Empirical evidence
Human resources
Applications of artificial intelligence
Social influence
Service provider
Affect (linguistics)
Boundary (topology)
Emotional support
Authors
Xin Zhang,Shenghui Liu,Liang Ma,Feifei Hao,Ge Zhang
Identifier
DOI:10.1108/ajim-04-2025-0192
Abstract
Purpose – The interaction between humans and artificial intelligence (AI) is becoming increasingly common. However, whether humans place more trust in AI or in human doctors remains unclear. The purpose of this paper is to investigate whether there are significant differences in patient trust between AI and human doctors, and to identify the underlying mechanisms and boundary conditions.

Design/methodology/approach – Based on social support theory and using experimental methods, this study examines differences in trust (competence, goodwill and integrity) between AI doctors and human doctors by analyzing data from 236 online experiment participants.

Findings – The results indicate that, compared to AI doctors, patients exhibit higher levels of trust in human doctors across all three dimensions. Furthermore, informational support and emotional support mediate the relationship between service agent type (AI vs human) and patient trust, and response length moderates the impact of service agent type on these two forms of social support.

Originality/value – These findings contribute to a deeper understanding of the relationship between human–AI interaction and patient trust by investigating the mediating roles of informational support and emotional support, as well as the moderating role of response length. The study also offers practical implications for improving patient trust in AI through optimized AI design.