Large Language Models are Zero-Shot Reasoners

Authors
Takeshi Kojima, Shixiang Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Source
Journal: Cornell University - arXiv · Cited by: 804
Identifier
DOI: 10.48550/arxiv.2205.11916
Abstract

Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners with task-specific exemplars. Notably, chain-of-thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved state-of-the-art performance on arithmetic and symbolic reasoning, difficult System-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners when "Let's think step by step" is simply added before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performance on diverse benchmark reasoning tasks, including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g., increasing accuracy on MultiArith from 17.7% to 78.7% and on GSM8K from 10.4% to 40.7% with the large InstructGPT model (text-davinci-002), with improvements of similar magnitude from another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting that high-level, multi-task, broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal strongest zero-shot baseline for these challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting fine-tuning datasets or few-shot exemplars.
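
The abstract describes Zero-shot-CoT as a two-stage prompting procedure: first elicit a rationale with the trigger phrase, then feed that rationale back to extract the final answer. Below is a minimal Python sketch of that pipeline under stated assumptions: complete() is a hypothetical placeholder for any text-completion LLM API (not code from the paper), and the Stage-2 phrase follows the template the paper reports for arithmetic tasks.

    def complete(prompt: str) -> str:
        """Hypothetical placeholder: send `prompt` to an LLM
        (e.g. text-davinci-002) and return its raw completion."""
        raise NotImplementedError("wire this to your LLM provider")

    def zero_shot_cot(question: str) -> str:
        # Stage 1: reasoning extraction -- append the trigger phrase so
        # the model generates a step-by-step rationale before answering.
        reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
        rationale = complete(reasoning_prompt)

        # Stage 2: answer extraction -- feed the rationale back and ask
        # for the final answer in a parseable format.
        answer_prompt = (
            f"{reasoning_prompt} {rationale}\n"
            "Therefore, the answer (arabic numerals) is"
        )
        return complete(answer_prompt).strip()

In the paper, the Stage-2 extraction phrase is varied to match the expected answer format (e.g., a choice letter for multiple-choice tasks rather than a number), and the returned string is cleaned with simple answer parsing before scoring.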
