Generative grammar
Psychology
Healthcare
Artificial intelligence
Computer science
Business
Economics
Economic growth
Authors
LJ Jin, Zijun Shen, Anas Ali Alhur, Salman Bin Naeem
Identifier
DOI: 10.1177/02666669251340954
Abstract
Artificial intelligence (AI) hallucinations, that is, erroneous outputs containing misleading or nonsensical content, pose significant risks when consumers seek health information, a domain where inaccuracies can lead to harmful outcomes. This study explores the determinants of AI hallucination exposure (HEX), examines HEX's potential direct and mediating effects on adoption intentions, and integrates HEX into the Theory of Planned Behavior (TPB) to advance generative AI adoption models. An eight-factor measurement model, grounded in the TPB and comprising perceived usefulness, attitude toward AI, perceived risk, subjective norms, perceived behavioral control, user trust, AI hallucination exposure, and behavioral intention, was developed and tested using structural equation modeling (SEM). The study concludes that perceived behavioral control is a significant determinant of HEX, while subjective norms exert a direct influence on behavioral intention (BI) to adopt generative AI chatbots. HEX does not significantly mediate the relationship between the key antecedents and use of generative AI chatbots. These findings advance the TPB by integrating AI-specific risks such as HEX, while underscoring the need to refine theoretical models for contexts where technological reliability, rather than user perceptions alone, drives adoption decisions.
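For readers unfamiliar with how an eight-factor SEM of this kind is specified, the following is a minimal sketch using the Python semopy package. The indicator names (pu1, att1, etc.), the structural paths, and the data file are hypothetical illustrations; the paper's actual survey items, hypothesized paths, and estimator are not reproduced here.

```python
# Hedged sketch of an eight-factor SEM in lavaan-style syntax via semopy.
# All indicator names and the CSV path are hypothetical placeholders.
import pandas as pd
from semopy import Model

MODEL_DESC = """
# Measurement model: each latent construct loads on its survey items.
PU  =~ pu1 + pu2 + pu3
ATT =~ att1 + att2 + att3
PR  =~ pr1 + pr2 + pr3
SN  =~ sn1 + sn2 + sn3
PBC =~ pbc1 + pbc2 + pbc3
TR  =~ tr1 + tr2 + tr3
HEX =~ hex1 + hex2 + hex3
BI  =~ bi1 + bi2 + bi3

# Structural model: HEX as a candidate mediator between TPB
# antecedents and behavioral intention, as the abstract describes.
HEX ~ PU + ATT + PR + SN + PBC + TR
BI  ~ HEX + SN + PBC
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey data
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```

In such a specification, a non-significant HEX path in the `BI ~` equation, alongside significant direct effects of SN and PBC, would correspond to the pattern of results the abstract reports.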