LLaVA-MoD: Making LLaVA Tiny via MoE Knowledge Distillation

Authors
Fangxun Shu, Yue Liao, Le Zhuo, Chenning Xu, G. X. Zhang, Haonan Shi, Long Chen, Tao Zhong, W. He, Siming Fu, Haoyuan Li, Bolin Li, Zhelun Yu, Si Liu, Hongsheng Li, Hao Jiang
Source
Journal: Cornell University - arXiv · Citations: 1
Identifier
DOI: 10.48550/arxiv.2408.15881
Abstract

We introduce LLaVA-MoD, a novel framework designed to enable the efficient training of small-scale Multimodal Language Models (s-MLLM) by distilling knowledge from a large-scale MLLM (l-MLLM). Our approach tackles two fundamental challenges in MLLM distillation. First, we optimize the network structure of the s-MLLM by integrating a sparse Mixture of Experts (MoE) architecture into the language model, striking a balance between computational efficiency and model expressiveness. Second, we propose a progressive knowledge transfer strategy to ensure comprehensive knowledge migration. This strategy begins with mimic distillation, where we minimize the Kullback-Leibler (KL) divergence between output distributions to enable the student model to emulate the teacher network's understanding. Following this, we introduce preference distillation via Direct Preference Optimization (DPO), where the key lies in treating the l-MLLM as the reference model. During this phase, the s-MLLM's ability to discriminate between superior and inferior examples is significantly enhanced beyond the l-MLLM, leading to a better student that surpasses its teacher, particularly on hallucination benchmarks. Extensive experiments demonstrate that LLaVA-MoD outperforms existing models across various multimodal benchmarks while maintaining a minimal number of activated parameters and low computational costs. Remarkably, LLaVA-MoD, with only 2B activated parameters, surpasses Qwen-VL-Chat-7B by an average of 8.8% across benchmarks, using merely 0.3% of the training data and 23% of the trainable parameters. These results underscore LLaVA-MoD's ability to effectively distill comprehensive knowledge from its teacher model, paving the way for the development of more efficient MLLMs. The code will be available on: https://github.com/shufangxun/LLaVA-MoD.
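The progressive transfer described in the abstract combines a KL-based mimic loss with a DPO-style preference loss in which the large teacher (l-MLLM) serves as the reference model. The PyTorch sketch below illustrates the two loss terms under stated assumptions; the function names, temperature, and beta value are illustrative choices, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F


def mimic_distillation_loss(student_logits: torch.Tensor,
                            teacher_logits: torch.Tensor,
                            temperature: float = 1.0) -> torch.Tensor:
    """Stage 1 (sketch): KL divergence between teacher and student
    next-token distributions, so the student mimics the teacher's outputs.
    The temperature and loss weighting are assumptions for illustration."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # kl_div expects log-probabilities for the input and probabilities for the target
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2


def preference_distillation_loss(student_chosen_logps: torch.Tensor,
                                 student_rejected_logps: torch.Tensor,
                                 teacher_chosen_logps: torch.Tensor,
                                 teacher_rejected_logps: torch.Tensor,
                                 beta: float = 0.1) -> torch.Tensor:
    """Stage 2 (sketch): DPO objective with the teacher as the reference model.
    Inputs are summed response log-probabilities for chosen/rejected pairs."""
    # Log-ratios of student (policy) vs. teacher (reference) for each response
    chosen_ratio = student_chosen_logps - teacher_chosen_logps
    rejected_ratio = student_rejected_logps - teacher_rejected_logps
    # Encourage the student to separate chosen from rejected responses
    # more strongly than the teacher reference does
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```

Using the teacher as the DPO reference (rather than a frozen copy of the student) is what lets the student's preference margins grow beyond the teacher's, which the abstract credits for the gains on hallucination benchmarks.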