Misinformation
Chatbot
Disinformation
Computer science
Leverage (statistics)
Internet privacy
Persuasion
Psychology
Human–computer interaction
World Wide Web
Social psychology
Social media
Computer security
Artificial intelligence
Authors
Elena Musi,Elinor Carmi,Chris Reed,Simeon Yates,Kay L. O’Halloran
Identifier
DOI:10.1177/20563051221150407
Abstract
To counter the fake news phenomenon, the scholarly community has attempted to debunk and prebunk disinformation. However, misinformation still constitutes a major challenge due to the variety of misleading techniques and their continuous updates, which call for the exercise of critical thinking to build resilience. In this study we present two open-access chatbots, the Fake News Immunity Chatbot and the Vaccinating News Chatbot, which combine Fallacy Theory and Human–Computer Interaction to inoculate citizens and communication gatekeepers against misinformation. These chatbots differ from existing tools in both function and form. First, they target misinformation and enhance the identification of fallacious arguments; second, they are multiagent and leverage discourse theories of persuasion in their conversational design. After describing both their backend and frontend design, we report on an evaluation of the user interface and of the impact on users' critical thinking skills through a questionnaire, a crowdsourced survey, and a pilot qualitative experiment. The results shed light on best practices for designing user-friendly active inoculation tools and reveal that the two chatbots are perceived as increasing critical thinking skills in the current misinformation ecosystem.