
Continual Learning of Large Language Models: A Comprehensive Survey

Subjects: Computer Science; Natural Language Processing; Artificial Intelligence
Authors
Haizhou Shi, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Zifeng Wang, Sayna Ebrahimi, Hao Wang
Source
Journal: ACM Computing Surveys [Association for Computing Machinery]
Volume/Issue: 58(5): 1-42. Cited by: 19
Identifier
DOI: 10.1145/3735633
Abstract

Effectively and efficiently adapting statically pre-trained Large Language Models (LLMs) to ever-evolving data distributions remains a central challenge. When tailored to specific needs, pre-trained LLMs often suffer significant performance degradation on previously learned knowledge domains, a phenomenon known as "catastrophic forgetting". While extensively studied in the Continual Learning (CL) community, this problem presents new challenges in the context of LLMs. In this survey, we provide a comprehensive overview and detailed discussion of current research progress on LLMs within the context of CL. In addition to introducing preliminary knowledge, this survey is structured into four main sections: we first give an overview of continually learning LLMs, consisting of two directions of continuity: vertical continuity (or vertical continual learning), i.e., continual adaptation from general to specific capabilities, and horizontal continuity (or horizontal continual learning), i.e., continual adaptation across time and domains (Section 3). Following vertical continuity, we summarize three stages of learning LLMs in the context of modern CL: Continual Pre-Training (CPT), Domain-Adaptive Pre-training (DAP), and Continual Fine-Tuning (CFT) (Section 4). We then provide an overview of evaluation protocols for continual learning with LLMs, along with currently available data sources (Section 5). Finally, we discuss intriguing questions related to continual learning for LLMs (Section 6). This survey sheds light on the relatively understudied practice of continually pre-training, adapting, and fine-tuning large language models, and argues that it deserves greater attention from the community. Key areas requiring immediate focus include the development of practical and accessible evaluation benchmarks, along with methodologies specifically designed to counter forgetting and enable knowledge transfer within the evolving landscape of LLM learning paradigms. The full list of articles examined in this survey is available at https://github.com/Wang-ML-Lab/llm-continual-learning-survey.
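For readers unfamiliar with the forgetting-mitigation methods the abstract alludes to, the following is a minimal Python/PyTorch sketch, not taken from the survey, of one classic technique: replay-based continual fine-tuning, where each new-domain batch is rehearsed alongside stored earlier-domain data. All names here (model, optimizer, replay_buffer, continual_finetune_step) are hypothetical illustrations, not the survey's API.

# A minimal sketch, assuming a PyTorch classification-style setup.
# One replay-based continual fine-tuning step: mix a rehearsal batch of
# earlier-domain data into the current new-domain batch so the model
# keeps revisiting old distributions while adapting to the new one.
import random
import torch
import torch.nn.functional as F

def continual_finetune_step(model, optimizer, new_batch, replay_buffer,
                            max_buffer=1000):
    inputs, labels = new_batch
    if replay_buffer:
        # Rehearse: append one stored earlier-domain batch to the new batch.
        old_inputs, old_labels = random.choice(replay_buffer)
        inputs = torch.cat([inputs, old_inputs])
        labels = torch.cat([labels, old_labels])

    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs), labels)
    loss.backward()
    optimizer.step()

    # Keep a bounded store of raw batches for future rehearsal
    # (reservoir sampling would be the more principled choice).
    if len(replay_buffer) < max_buffer:
        replay_buffer.append(new_batch)
    return loss.item()

Replay is only one family of methods the survey covers; regularization-based approaches (e.g., penalizing drift of parameters important to earlier tasks) and parameter-isolation approaches are common alternatives when storing raw earlier-domain data is impractical.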