Authors
Harmon Bhasin, Timothy Ossowski, Yongwang Zhong, Junjie Hu
Source
Journal: Cornell University - arXiv
Date: 2024-04-04
Identifier
DOI: 10.48550/arxiv.2404.03558
Abstract
Large language models (LLMs) have recently shown the extraordinary ability to perform unseen tasks based on few-shot examples provided as text, also known as in-context learning (ICL). While recent works have attempted to understand the mechanisms driving ICL, few have explored training strategies that incentivize these models to generalize to multiple tasks. Multi-task learning (MTL) for generalist models is a promising direction that offers transfer learning potential, enabling large parameterized models to be trained from simpler, related tasks. In this work, we investigate the combination of MTL with ICL to build models that efficiently learn tasks while being robust to out-of-distribution examples. We propose several effective curriculum learning strategies that allow ICL models to achieve higher data efficiency and more stable convergence. Our experiments reveal that ICL models can effectively learn difficult tasks by training on progressively harder tasks while mixing in prior tasks, denoted as a mixed curriculum in this work. Our code and models are available at https://github.com/harmonbhasin/curriculum_learning_icl.
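The "mixed curriculum" the abstract describes, training on progressively harder tasks while continuing to sample from all previously introduced ones, can be illustrated with a short task-sampling sketch. The Python below is a minimal, assumed implementation: the task names, difficulty ordering, and fixed unlock schedule are illustrative placeholders, not the paper's exact configuration.

```python
# Minimal sketch of a mixed-curriculum task sampler: one harder task is
# unlocked at each stage, and training steps draw uniformly from ALL
# unlocked tasks, so easier tasks keep appearing throughout training.
# Task names and the stage length are hypothetical, for illustration only.
import random

# Tasks ordered from easiest to hardest (assumed ordering).
CURRICULUM = ["linear_regression", "sparse_regression", "decision_tree", "2_layer_nn"]

def mixed_curriculum_task(step: int, steps_per_stage: int = 10_000) -> str:
    """Pick the training task for a given optimization step."""
    # Advance one stage every `steps_per_stage` steps, capped at the last task.
    stage = min(step // steps_per_stage, len(CURRICULUM) - 1)
    # Sample uniformly from every task unlocked so far, mixing in prior tasks.
    unlocked = CURRICULUM[: stage + 1]
    return random.choice(unlocked)

# Early steps see only the easiest task; late steps mix all four.
print(mixed_curriculum_task(0))       # always "linear_regression"
print(mixed_curriculum_task(35_000))  # uniform over all four tasks
```

The contrast with a strict (non-mixed) curriculum is the `unlocked` slice: a strict schedule would train only on `CURRICULUM[stage]`, whereas keeping earlier tasks in the sampling pool is what the abstract credits with more stable convergence.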