
Large language models show amplified cognitive biases in moral decision-making

Authors
Vanessa Cheung, Maximilian Maier, Falk Lieder
Source
Journal: Proceedings of the National Academy of Sciences of the United States of America [National Academy of Sciences]
Volume/Issue: 122 (25) · Citations: 5
Identifier
DOI: 10.1073/pnas.2412015122
Abstract

As large language models (LLMs) become more widely used, people increasingly rely on them to make or advise on moral decisions. Some researchers even propose using LLMs as participants in psychology experiments. It is, therefore, important to understand how well LLMs make moral decisions and how they compare to humans. We investigated these questions by asking a range of LLMs to emulate or advise on people’s decisions in realistic moral dilemmas. In Study 1, we compared LLM responses to those of a representative U.S. sample (N = 285) for 22 dilemmas, including both collective action problems that pitted self-interest against the greater good, and moral dilemmas that pitted utilitarian cost–benefit reasoning against deontological rules. In collective action problems, LLMs were more altruistic than participants. In moral dilemmas, LLMs exhibited stronger omission bias than participants: They usually endorsed inaction over action. In Study 2 (N = 474, preregistered), we replicated this omission bias and documented an additional bias: Unlike humans, most LLMs were biased toward answering “no” in moral dilemmas, thus flipping their decision/advice depending on how the question is worded. In Study 3 (N = 491, preregistered), we replicated these biases in LLMs using everyday moral dilemmas adapted from forum posts on Reddit. In Study 4, we investigated the sources of these biases by comparing models with and without fine-tuning, showing that they likely arise from fine-tuning models for chatbot applications. Our findings suggest that uncritical reliance on LLMs’ moral decisions and advice could amplify human biases and introduce potentially problematic biases.