MPMoE: Memory Efficient MoE for Pre-Trained Models With Adaptive Pipeline Parallelism

Computer Science, Parallel Computing, Pipeline (Software), Parallelism (Grammar), Task Parallelism, Instruction-Level Parallelism, Computer Architecture, Operating Systems
Authors
Zheng Zhang,Yaqi Xia,H. Wang,Donglin Yang,Chuang Hu,Xiaobo Zhou,Dazhao Cheng
Source
Journal: IEEE Transactions on Parallel and Distributed Systems [Institute of Electrical and Electronics Engineers]
Volume/Issue: 35(6): 998-1011, Cited by: 2
Identifier
DOI:10.1109/tpds.2024.3385639
Abstract

In recent years, the Mixture-of-Experts (MoE) technique has gained widespread popularity as a means to scale pre-trained models to exceptionally large sizes. Dynamic activation of experts allows for conditional computation, increasing the number of parameters of neural networks, which is critical for absorbing the vast amounts of knowledge available in many deep learning areas. However, despite existing system and algorithm optimizations, significant challenges remain in the inefficiencies of communication and memory consumption. In this paper, we present the design and implementation of MPMoE, a high-performance library that accelerates MoE training with adaptive and memory-efficient pipeline parallelism. Inspired by the observation that the MoE training procedure can be divided into multiple independent sub-stages, we design a pipeline parallelism method that reduces communication latency by overlapping communication with computation operations. Further, we analyze the memory footprint breakdown of MoE training and identify that activations and temporary buffers are the primary contributors to the overall memory footprint. Toward memory efficiency, we propose memory reuse strategies that reduce memory requirements by eliminating memory redundancies. Finally, to jointly optimize pipeline granularity and memory reuse strategies, we propose a profile-based algorithm and a performance model that determine the configurations of MPMoE at runtime. We implement MPMoE upon PyTorch and evaluate it with common MoE models on two physical clusters comprising 64 NVIDIA A100 GPUs and 16 NVIDIA V100 GPUs. Compared with the state-of-the-art approach, MPMoE achieves up to 2.3× speedup while reducing the memory footprint by more than 30% when training large models.
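To make the pipelining idea in the abstract concrete, the following is a minimal PyTorch sketch (not the MPMoE implementation) of how an MoE layer's dispatch/combine all-to-all communication can be overlapped with expert computation by splitting tokens into micro-batches and issuing communication on a dedicated CUDA stream. The function and parameter names (`moe_forward_pipelined`, `expert_ffn`, `num_micro_batches`) are hypothetical, and it assumes an initialized NCCL process group with a uniform per-rank token split; real MoE dispatch sizes depend on the router, and MPMoE additionally tunes the pipeline granularity and applies memory reuse, which are not shown here.

```python
import torch
import torch.distributed as dist


def moe_forward_pipelined(tokens, expert_ffn, num_micro_batches=4):
    """Illustrative pipelined MoE forward pass.

    Splits `tokens` (shape: [T, d_model]) into micro-batches and overlaps the
    dispatch/combine all-to-all of one micro-batch with the expert FFN of
    another by queuing communication on a separate CUDA stream.
    """
    compute_stream = torch.cuda.current_stream()
    comm_stream = torch.cuda.Stream()
    chunks = tokens.chunk(num_micro_batches, dim=0)

    dispatched, dispatch_done = [], []
    for chunk in chunks:
        # Dispatch all-to-all on the communication stream; record an event so
        # each micro-batch's computation waits only for its own dispatch.
        with torch.cuda.stream(comm_stream):
            recv = torch.empty_like(chunk)
            dist.all_to_all_single(recv, chunk.contiguous())
        recv.record_stream(compute_stream)   # recv is later used on compute_stream
        event = torch.cuda.Event()
        event.record(comm_stream)
        dispatched.append(recv)
        dispatch_done.append(event)

    outputs = []
    for recv, event in zip(dispatched, dispatch_done):
        # The expert FFN of this micro-batch overlaps with the all-to-all of
        # the micro-batches still queued on comm_stream.
        compute_stream.wait_event(event)
        hidden = expert_ffn(recv)
        comm_stream.wait_stream(compute_stream)  # hidden must be ready before combine
        with torch.cuda.stream(comm_stream):
            combined = torch.empty_like(hidden)
            dist.all_to_all_single(combined, hidden.contiguous())
        combined.record_stream(compute_stream)
        outputs.append(combined)

    compute_stream.wait_stream(comm_stream)      # join both streams before returning
    return torch.cat(outputs, dim=0)
```

In this sketch the micro-batch count plays the role of the pipeline granularity discussed in the abstract: more micro-batches expose more overlap between communication and computation but add kernel-launch and synchronization overhead, which is why a profile-based model is needed to pick the configuration at runtime.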