Testing theory of mind in large language models and humans

Subjects: Psychology · Cognitive science · Cognitive psychology · Computer science
Authors
James W. A. Strachan, Dalila Albergo, Giulia Borghini, Oriana Pansardi, Eugenio Scaliti, Saurabh Gupta, K. B. Saxena, Alessandro Rufo, Stefano Panzeri, G. Manzi, Michael S. A. Graziano, Cristina Becchio
Source
Journal: Nature Human Behaviour [Nature Portfolio]
Volume/Issue: 8 (7): 1285–1295 | Citations: 53
Identifier
DOI:10.1038/s41562-024-01882-z
Abstract

At the core of what defines us as humans is the concept of theory of mind: the ability to track other people's mental states. The recent development of large language models (LLMs) such as ChatGPT has led to intense debate about the possibility that these models exhibit behaviour that is indistinguishable from human behaviour in theory of mind tasks. Here we compare human and LLM performance on a comprehensive battery of measurements aiming to assess different theory of mind abilities, from understanding false beliefs to interpreting indirect requests and recognizing irony and faux pas. We tested two families of LLMs (GPT and LLaMA2) repeatedly against these measures and compared their performance with that of a sample of 1,907 human participants. Across the battery of theory of mind tests, we found that GPT-4 models performed at, or sometimes even above, human levels at identifying indirect requests, false beliefs and misdirection, but struggled with detecting faux pas. Faux pas, however, was the only test where LLaMA2 outperformed humans. Follow-up manipulations of the belief likelihood revealed that the superiority of LLaMA2 was illusory, possibly reflecting a bias towards attributing ignorance. By contrast, the poor performance of GPT originated from a hyperconservative approach towards committing to conclusions rather than from a genuine failure of inference. These findings not only demonstrate that LLMs exhibit behaviour that is consistent with the outputs of mentalistic inference in humans but also highlight the importance of systematic testing to ensure a non-superficial comparison between human and artificial intelligences.
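The abstract's description of repeatedly testing GPT and LLaMA2 models against theory-of-mind items can be pictured with a minimal sketch. Everything below is an illustrative assumption rather than material from the paper: the false-belief vignette, the model name, the number of runs and the one-word scoring rule are made up for demonstration. The sketch only shows the general shape of prompting a chat model several times on the same item and tallying its answers, using the official openai Python client.

```python
"""Minimal sketch (not the authors' code): repeatedly query an LLM on one
false-belief vignette and tally its answers. Vignette, model name and
scoring rule are illustrative assumptions only."""
from collections import Counter

from openai import OpenAI  # assumes the openai>=1.0 Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative false-belief vignette (not an item from the paper's battery).
VIGNETTE = (
    "Sally puts her ball in the basket and leaves the room. "
    "While she is away, Anne moves the ball into the box. "
    "When Sally returns, where will she look for her ball first? "
    "Answer with a single word: basket or box."
)


def ask_once(model: str = "gpt-4") -> str:
    """Send the vignette once and return the model's one-word answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": VIGNETTE}],
        temperature=1.0,  # keep sampling variability across repeated runs
    )
    return response.choices[0].message.content.strip().lower()


def run_item(n_runs: int = 15, model: str = "gpt-4") -> Counter:
    """Repeat the same item n_runs times and count the answers given."""
    return Counter(ask_once(model) for _ in range(n_runs))


if __name__ == "__main__":
    counts = run_item()
    # The false-belief-consistent answer is "basket" (where Sally left the ball).
    accuracy = counts.get("basket", 0) / sum(counts.values())
    print(counts, f"accuracy={accuracy:.2f}")
```

In the paper's design such per-item scores would then be aggregated across a whole battery of tests and compared against responses from human participants; the sketch stops at a single item for brevity.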