Large Language Models are Zero-Shot Reasoners

Authors
Takeshi Kojima, Shixiang Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Source
Journal: Cornell University - arXiv · Cited by: 804
Identifier
DOI: 10.48550/arxiv.2205.11916
Abstract

Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners when given task-specific exemplars. Notably, chain-of-thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, has achieved state-of-the-art performance on arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' few-shot learning ability, we show that LLMs are decent zero-shot reasoners when "Let's think step by step" is simply added before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms standard zero-shot prompting on diverse benchmark reasoning tasks, including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g., increasing the accuracy on MultiArith from 17.7% to 78.7% and on GSM8K from 10.4% to 40.7% with the large InstructGPT model (text-davinci-002), with improvements of similar magnitude for another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting that broad, high-level, multi-task cognitive capabilities may be extracted through simple prompting. We hope our work not only serves as the minimal, strongest zero-shot baseline for these challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
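
In practice, the Zero-shot-CoT method described in the abstract is a two-stage prompting pipeline: one completion call with the trigger phrase to elicit a reasoning chain, and a second call to extract the final answer from that chain. The Python sketch below illustrates the shape of that pipeline, assuming a completion-style model; `complete` is a hypothetical placeholder for a greedy-decoding call to any LLM provider (e.g. text-davinci-002), not a real API, and the answer-extraction phrase shown is the variant the paper reports for arithmetic tasks.

```python
# Minimal sketch of two-stage Zero-shot-CoT prompting.
# `complete(prompt)` is a hypothetical stand-in for a call to a
# text-completion LLM; wire it to your provider of choice.

def complete(prompt: str) -> str:
    """Placeholder for a greedy-decoding completion call to an LLM."""
    raise NotImplementedError("replace with a real LLM completion call")

def zero_shot_cot(question: str) -> str:
    # Stage 1: reasoning extraction. The single task-agnostic trigger
    # phrase "Let's think step by step" elicits a chain of thought
    # without any few-shot exemplars.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = complete(reasoning_prompt)

    # Stage 2: answer extraction. The generated reasoning is appended
    # to the prompt, and a second completion pulls out the final answer
    # in the desired format (here, the arithmetic-task phrasing).
    answer_prompt = (
        f"{reasoning_prompt} {reasoning}\n"
        "Therefore, the answer (arabic numerals) is"
    )
    return complete(answer_prompt).strip()
```

Note that the only task-specific piece is the answer-extraction phrasing in stage 2; the stage-1 trigger is the same single template across all of the benchmarks listed above, which is the point of the method.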