Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-Level Sparsity via Mixture-of-Experts

Authors
Rishov Sarkar, Hanxue Liang, Zhiwen Fan, Zhangyang Wang, Cong Hao
Identifier
DOI: 10.1109/iccad57390.2023.10323651
Abstract

The computer vision community is embracing two promising learning paradigms: the Vision Transformer (ViT) and Multi-task Learning (MTL). ViT models show extraordinary performance over traditional convolutional networks but are commonly recognized as computation-intensive, especially the self-attention with its quadratic complexity. MTL uses one model to infer multiple tasks with better performance by enforcing shared representations among tasks, but a major drawback is that most MTL regimes require activation of the entire model even when only one or a few tasks are needed, causing significant wasted computation. M³ViT is the latest multi-task ViT model that introduces mixture-of-experts (MoE), where only a small portion of subnetworks ("experts") are sparsely and dynamically activated based on the current task. M³ViT achieves better accuracy and over 80% computation reduction, paving the way for efficient real-time MTL using ViT. Despite the algorithmic advantages of MTL, ViT, and even M³ViT, many challenges remain for efficient deployment on FPGA. For instance, in general Transformer/ViT models, self-attention is known to be computationally intensive and to require high memory bandwidth. In addition, softmax operations and the GELU activation function are used extensively, and these can unfortunately consume more than half of the entire FPGA resource budget (LUTs). In the M³ViT model, the promising MoE mechanism for multi-tasking exposes new challenges in memory access overhead and also increases resource usage because of the additional layer types. To address these challenges in both general Transformer/ViT models and the state-of-the-art multi-task M³ViT with MoE, we propose Edge-MoE, the first end-to-end FPGA accelerator for multi-task ViT, with a rich collection of architectural innovations.
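To make the quadratic self-attention cost concrete, below is a minimal NumPy sketch of single-head attention. The N×N score matrix is the term that dominates compute and bandwidth as the number of patches N grows; this is an illustrative software analogue only, not the paper's FPGA implementation, and the identity projection weights are a toy choice.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention; the (N, N) score matrix is the quadratic term."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # shape (N, N): quadratic in sequence length
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # numerically stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

N, d = 196, 64                     # e.g., 14x14 ViT patch grid, 64-dim head (toy sizes)
x = np.random.randn(N, d)
Wq = Wk = Wv = np.eye(d)           # identity projections, for illustration only
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)                   # (196, 64); the intermediate scores are 196x196
```

Doubling N quadruples the score matrix, which is why the bandwidth-reduction reordering described below matters on memory-constrained FPGAs.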
First, for general Transformer/ViT models, we propose (1) a novel reordering mechanism for self-attention, which reduces the bandwidth requirement from proportional to constant regardless of the target parallelism; (2) a fast single-pass softmax approximation; (3) an accurate and low-cost GELU approximation, which significantly reduces computation latency and resource usage; and (4) a unified and flexible computing unit that can be shared by almost all computational layers to maximally reduce resource usage. Second, for the advanced multi-task M³ViT with MoE, we propose a novel patch reordering method that completely eliminates memory access overhead. Third, we deliver an on-board implementation and measurement on a Xilinx ZCU102 FPGA, with verified functionality and an open-sourced hardware design, which achieves 2.24× and 4.90× better energy efficiency compared with a GPU (A6000) and a CPU (Xeon 6226R), respectively. A real-time video demonstration of our accelerated multi-task ViT on an autonomous driving dataset is available on GitHub (https://github.com/sharc-lab/Edge-MoE/raw/main/demo.mp4), together with our FPGA design using High-Level Synthesis, host code, FPGA bitstream, and on-board performance results.
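The single-pass softmax and low-cost GELU ideas can be illustrated with standard software analogues: an "online" softmax that tracks a running maximum and rescaled sum so the scores are traversed only once, and the tanh-based GELU approximation commonly used in hardware-friendly designs. These are well-known techniques sketched here for intuition; the paper's exact FPGA approximations may differ.

```python
import math

def online_softmax(xs):
    """One-pass softmax: maintain a running max m and a rescaled sum s,
    so no separate max-finding pass over xs is needed."""
    m, s = float("-inf"), 0.0
    for x in xs:
        if x > m:
            s = s * math.exp(m - x) + 1.0   # rescale old sum to the new max
            m = x
        else:
            s += math.exp(x - m)
    return [math.exp(x - m) / s for x in xs]

def gelu_tanh(x):
    """Hardware-friendly GELU: tanh approximation instead of the exact erf form."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

probs = online_softmax([1.0, 2.0, 3.0])
# probs sums to ~1.0 and matches the exact two-pass softmax
y = gelu_tanh(1.0)
# ~0.8412, close to the exact GELU value at x = 1
```

The online formulation is attractive in hardware because each score is read from memory exactly once, and the tanh form of GELU avoids evaluating erf directly.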