Keywords
Politeness, Natural Language Processing, Psychology, Computer Science, Artificial Intelligence, Linguistics, Philosophy
Authors
Richard Pak, Ericka Rovira, Anne Collins McLaughlin
Source
Journal: Ergonomics
Publisher: Taylor & Francis
Date: 2024-11-28
Volume/Issue: 1-11
Citations: 1
Identifier
DOI: 10.1080/00140139.2024.2434604
Abstract
With their increased capability, AI-based chatbots have become popular tools for answering complex queries. However, these chatbots may hallucinate, generating incorrect but highly plausible-sounding information, more frequently than previously thought. It is therefore crucial to examine strategies that mitigate human susceptibility to hallucinated output. In a between-subjects experiment, participants completed a difficult quiz with assistance from either a polite or a neutral-toned AI chatbot that occasionally provided hallucinated (incorrect) information. Signal detection analysis revealed that participants interacting with the polite AI showed modestly higher sensitivity in detecting hallucinations and a more conservative response bias than those interacting with the neutral-toned AI. Although the observed effect sizes were modest, even small improvements in users' ability to detect AI hallucinations can have significant consequences, particularly in high-stakes domains or when aggregated across millions of AI interactions.
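For readers unfamiliar with the signal detection measures the abstract references, the sketch below shows how sensitivity (d') and response bias (criterion c) are conventionally computed under the standard equal-variance Gaussian model, treating hallucinated answers as "signal" trials. The counts, function name, and log-linear correction are illustrative assumptions, not the authors' actual analysis.

```python
# A minimal sketch of the signal detection measures mentioned in the
# abstract: sensitivity d' and criterion c, under the standard
# equal-variance Gaussian model. The paper's exact computation is not
# given here; the correction and names below are assumptions.
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion_c) from raw trial counts.

    Here a "hit" = correctly flagging a hallucinated answer and a
    "false alarm" = flagging a correct answer as hallucinated.
    A log-linear correction avoids infinite z-scores when a rate
    is exactly 0 or 1 (an assumed, commonly used adjustment).
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)              # higher = better detection
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # positive = conservative bias
    return d_prime, criterion

# Hypothetical counts for one participant, purely for illustration:
print(sdt_measures(hits=14, misses=6, false_alarms=4, correct_rejections=16))
```

On this reading, the abstract's result means the polite-AI group had a somewhat larger d' (better discrimination of hallucinated from correct answers) and a more positive c (a greater reluctance to accept the chatbot's answers as correct).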