Boosting Multi-Modal Large Language Model With Enhanced Visual Features

Authors
Yiwei Ma, Weihuang Lin, Zhibin Wang, Jiayi Ji, Xiaoshuai Sun, Weisi Lin, Rongrong Ji
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [Institute of Electrical and Electronics Engineers]
Volume/Issue: PP, pp. 1-16
Identifier
DOI: 10.1109/tpami.2025.3644851
Abstract

Recent advancements in computer vision (CV) and large language models (LLMs) have spurred significant interest in multi-modal large language models (MLLMs), which aim to integrate visual and textual modalities for enhanced understanding and generation tasks. While much of the existing research focuses on optimizing projectors and LLMs to improve MLLM performance, a critical question remains underexplored: Has the full potential of visual features in MLLMs been realized? To address this question, we identify two key limitations in current MLLM architectures and propose vMLLM, a vision-enhanced MLLM designed to fully leverage the capabilities of visual features. vMLLM introduces two novel components: the Multi-level Aggregation Module (MAM) and the Intra- and inter-modal Enhancement Module (IEM). The MAM aggregates multi-layer features from the vision encoder, capturing both high-level semantic information and low-level spatial details, thereby enriching the visual representation. The IEM enhances visual features through intra- and inter-modal interactions, effectively suppressing irrelevant information while amplifying task-relevant features, leading to more robust multimodal understanding. We conduct extensive experiments on multiple benchmarks, evaluating vMLLM across diverse settings, including different vision encoders, training dataset scales, and varying sizes of LLMs. Our results demonstrate that vMLLM consistently achieves significant performance improvements, validating its effectiveness in harnessing the potential of visual features. These findings highlight the importance of optimizing visual feature extraction and interaction mechanisms in MLLMs, paving the way for more advanced multimodal AI systems. To promote reproducibility and further research, we have made the code and pre-trained models publicly available on GitHub: https://github.com/xmu-xiaoma666/vMLLM.

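The abstract describes the Multi-level Aggregation Module (MAM) as aggregating features from multiple layers of the vision encoder to combine high-level semantics with low-level spatial detail. The sketch below illustrates one common way such multi-layer aggregation can be realized (a learnable softmax-weighted sum over tapped layers followed by a light fusion projection). The module name, weighting scheme, and fusion layer here are illustrative assumptions, not the paper's actual implementation; the authors' code is available at the linked repository.

```python
import torch
import torch.nn as nn


class MultiLevelAggregation(nn.Module):
    """Toy multi-layer feature aggregator (illustrative, not the paper's MAM):
    a learnable softmax-weighted sum over hidden states tapped from several
    vision-encoder layers, followed by a linear fusion.
    Each tapped layer output has shape (batch, num_patches, dim)."""

    def __init__(self, num_layers: int, dim: int):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))  # one mixing scalar per tapped layer
        self.fuse = nn.Linear(dim, dim)                             # light projection after mixing

    def forward(self, layer_feats: list[torch.Tensor]) -> torch.Tensor:
        stacked = torch.stack(layer_feats, dim=0)                   # (L, B, N, D)
        weights = torch.softmax(self.layer_weights, dim=0)          # normalized mixing weights
        mixed = (weights.view(-1, 1, 1, 1) * stacked).sum(dim=0)    # weighted sum over layers -> (B, N, D)
        return self.fuse(mixed)


# usage: aggregate 4 tapped ViT layers with 576 patch tokens and 1024-dim features
feats = [torch.randn(2, 576, 1024) for _ in range(4)]
agg = MultiLevelAggregation(num_layers=4, dim=1024)
print(agg(feats).shape)  # torch.Size([2, 576, 1024])
```

The aggregated visual tokens would then be passed to the projector and LLM as in a standard MLLM pipeline; the IEM described in the abstract would additionally refine these tokens through intra- and inter-modal interactions before that step.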