Cognition
Psychology
Cognitive psychology
Social psychology
Cognitive science
Neuroscience
Source
Journal: Cornell University - arXiv
Date: 2024-04-14
Cited by: 1
Identifier
DOI: 10.48550/arxiv.2404.09329
Abstract
Large Language Models (LLMs) are already as persuasive as humans. However, we know very little about why. This paper investigates the persuasion strategies of LLMs, comparing them with human-generated arguments. Using data from an experiment with 1,251 participants, we analyze the persuasion strategies of LLM-generated and human-generated arguments using measures of cognitive effort (lexical and grammatical complexity) and moral-emotional language (sentiment and moral analysis). The study reveals that LLMs produce arguments that require higher cognitive effort, exhibiting more complex grammatical and lexical structures than their human counterparts. Additionally, LLMs demonstrate a significant propensity to engage more deeply with moral language, utilizing both positive and negative moral foundations more frequently than humans. In contrast with previous research, no significant difference was found in the emotional content produced by LLMs and humans. These findings contribute to the discourse on AI and persuasion, highlighting the dual potential of LLMs to both enhance and undermine informational integrity through communication strategies for digital persuasion.
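The abstract measures "cognitive effort" through lexical and grammatical complexity but does not spell out the metrics here. As a rough illustration only, the sketch below computes two common proxies for such measures: type-token ratio (lexical diversity) and mean sentence length (a crude grammatical-complexity stand-in). The function names and sample arguments are hypothetical and are not the authors' actual pipeline.

```python
# Minimal sketch (not the paper's code): two simple proxies for the
# text-complexity measures the abstract mentions.
import re

def type_token_ratio(text: str) -> float:
    """Lexical complexity proxy: unique word forms / total words."""
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def mean_sentence_length(text: str) -> float:
    """Grammatical complexity proxy: average words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

# Illustrative arguments (invented), to show the comparison the study makes.
llm_arg = "Vaccination protects entire communities; moreover, it reduces systemic risk."
human_arg = "Vaccines are good. They keep people safe."

print(type_token_ratio(llm_arg), mean_sentence_length(llm_arg))
print(type_token_ratio(human_arg), mean_sentence_length(human_arg))
```

In practice, studies of this kind often use richer metrics (e.g., readability indices or syntactic parse depth); the proxies above are chosen only because they are self-contained and easy to verify.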