Chatbot
Ambiguity
Competence (human resources)
Psychology
Dialogue system
Context
Task (project management)
Human-computer interaction
Computer science
Cognitive psychology
Social psychology
Dialogue
World Wide Web
Management
Economics
Programming language
Paleontology
Biology
Authors
Yi Jiang, Xiangcheng Yang, Tianqi Zheng
Identifier
DOI:10.1016/j.chb.2022.107485
Abstract
As one of the most popular AI applications, chatbots are creating new ways and value for businesses to interact with their customers, and their adoption and continued use will depend on users’ trust. However, due to the opacity of AI-related technologies and the ambiguity of application boundaries, it is difficult to determine which aspects enhance the adoption of chatbots and how they interactively affect human trust. Based on the theory of task-technology fit, we developed a research model to investigate how two conversational cues of chatbots, human-like cues and tailored responses, influence human trust toward chatbots and to explore appropriate boundary conditions (individual characteristics and task characteristics) in interacting with chatbots. One survey and two experiments were performed to test the research model, and the results indicated that (1) perceived task-solving competence and social presence mediate the pathway from conversational cues to human trust, which was validated in the contexts of e-commerce and education; (2) the extent of users’ ambiguity tolerance moderates the effects of the two conversational cues on social presence; and (3) when performing highly creative tasks, the human-like chatbot induces higher perceived task-solving competence. Our findings not only contribute to the AI trust-related literature but also provide practical implications for the development of chatbots and their assignment to individuals and tasks.