Computer science
Benchmark (surveying)
Task (project management)
Context (archaeology)
Set (abstract data type)
Adapter (computing)
Artificial intelligence
Code (set theory)
Simplicity (philosophy)
Machine learning
Computer engineering
Programming language
Computer hardware
Economics
Geography
Management
Paleontology
Philosophy
Epistemology
Biology
Geodesy
Authors
Haokun Liu, Derek Tam, Abdul Mohammed, Jay Mohta, Tenghao Huang, Mohit Bansal, Colin Raffel
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 144
Identifiers
DOI: 10.48550/arxiv.2205.05638
Abstract
Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unseen task without any gradient-based training by feeding a small number of training examples as part of the input. ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made. Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning, sparse update methods, etc.) offers an alternative paradigm where a small set of parameters are trained to enable a model to perform the new task. In this paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs. Along the way, we introduce a new PEFT method called (IA)$^3$ that scales activations by learned vectors, attaining stronger performance while only introducing a relatively tiny amount of new parameters. We also propose a simple recipe based on the T0 model called T-Few that can be applied to new tasks without task-specific tuning or modifications. We validate the effectiveness of T-Few on completely unseen tasks by applying it to the RAFT benchmark, attaining super-human performance for the first time and outperforming the state-of-the-art by 6% absolute. All of the code used in our experiments is publicly available.
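The abstract describes (IA)$^3$ only as "scaling activations by learned vectors"; in the paper these vectors rescale the attention keys and values and the intermediate activations of the position-wise feed-forward layer, and only the vectors are trained. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the function names (ia3_attention, ia3_ffn) and the single-head, unbatched shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def ia3_attention(Q, K, V, l_k, l_v):
    """Single-head attention with (IA)^3-style rescaling: learned vectors
    l_k and l_v multiply the keys and values element-wise before the
    usual attention computation."""
    d = Q.shape[-1]
    K = K * l_k                      # broadcast over the sequence dimension
    V = V * l_v
    scores = Q @ K.T / np.sqrt(d)
    return softmax(scores) @ V

def ia3_ffn(x, W1, b1, W2, b2, l_ff):
    """Position-wise feed-forward layer whose intermediate activation is
    rescaled by the learned vector l_ff."""
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU nonlinearity, for illustration
    return (h * l_ff) @ W2 + b2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, d_ff = 4, 8, 16
    x = rng.normal(size=(n, d))
    Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
    W1, W2 = rng.normal(size=(d, d_ff)), rng.normal(size=(d_ff, d))
    b1, b2 = np.zeros(d_ff), np.zeros(d)
    # Initializing the scaling vectors to ones leaves the frozen base model's
    # behaviour unchanged; fine-tuning then updates only these few parameters.
    l_k, l_v, l_ff = np.ones(d), np.ones(d), np.ones(d_ff)
    print(ia3_attention(Q, K, V, l_k, l_v).shape)   # (4, 8)
    print(ia3_ffn(x, W1, b1, W2, b2, l_ff).shape)   # (4, 8)
```

Because each vector has only as many entries as the dimension it rescales, the number of trainable parameters is tiny relative to the frozen backbone, which is what makes the method parameter-efficient compared with full fine-tuning or repeatedly processing in-context examples.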