Dynamic Contrastive Distillation for Image-Text Retrieval

Keywords: Computer Science · Artificial Intelligence · Machine Learning · Modal · Metric · Task · Distillation · Latency
Authors
Jun Rao, Liang Ding, Shuhan Qi, Meng Fang, Yang Liu, Li Shen, Dacheng Tao
Source
Journal: IEEE Transactions on Multimedia [Institute of Electrical and Electronics Engineers]
Volume: 25, Pages: 8383-8395 · Cited by: 26
Identifier
DOI: 10.1109/tmm.2023.3236837
Abstract

Although vision-and-language pretraining (VLP) has brought remarkable progress to cross-modal image-text retrieval (ITR) in the past two years, it suffers from a major drawback: the ever-increasing size of VLP models restricts their deployment in real-world search scenarios, where high latency is unacceptable. To alleviate this problem, we present a novel plug-in dynamic contrastive distillation (DCD) framework to compress large VLP models for the ITR task. Technically, we face two challenges: 1) typical uni-modal metric learning approaches are difficult to apply directly to cross-modal tasks, because limited GPU memory cannot accommodate the many negative samples needed when handling cross-modal fusion features; 2) statically optimizing the student network on hard samples is inefficient, since different hard samples affect distillation and student optimization to different degrees. We address these challenges in two ways. First, to enable multi-modal contrastive learning while balancing training cost and effectiveness, we use the teacher network to estimate which samples are difficult for the student, so that the student absorbs the powerful knowledge of the pre-trained teacher and masters the knowledge contained in hard samples. Second, we propose dynamic distillation, which adaptively learns from sample pairs of varying difficulty, better balancing the difficulty of the transferred knowledge against the student's own learning ability. We successfully apply the proposed DCD strategy to two state-of-the-art vision-language pre-trained models, i.e., ViLT and METER. Extensive experiments on the MS-COCO and Flickr30K benchmarks show the effectiveness and efficiency of our DCD framework. Encouragingly, we can speed up inference by at least 129× compared to existing ITR models. We further provide in-depth analyses and discussions that explain where the performance improvement comes from. We hope our work can shed light on other tasks that require distillation and contrastive learning.
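The abstract does not give the exact loss formulation, but its two core ideas — the teacher estimating per-sample difficulty, and that difficulty dynamically reweighting a contrastive distillation objective — can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical reading of the abstract, not the authors' implementation: the function name `dcd_loss`, the `gamma` knob, and the use of in-batch negatives in place of the paper's hard-negative mining are all assumptions.

```python
import torch
import torch.nn.functional as F

def dcd_loss(student_img, student_txt, teacher_img, teacher_txt,
             temperature=0.07, gamma=1.0):
    """Minimal sketch of one dynamic contrastive distillation step.

    Inputs are L2-normalized embeddings of shape (batch, dim); in-batch
    negatives stand in for whatever hard-negative mining the paper uses.
    `gamma` is an assumed knob controlling how strongly teacher-estimated
    difficulty rescales the per-sample loss.
    """
    # In-batch image-to-text similarity matrices for student and teacher.
    s_logits = student_img @ student_txt.t() / temperature
    t_logits = teacher_img @ teacher_txt.t() / temperature
    targets = torch.arange(s_logits.size(0), device=s_logits.device)

    # Per-sample contrastive (InfoNCE-style) loss for the student.
    ce = F.cross_entropy(s_logits, targets, reduction="none")

    # Teacher estimates difficulty: low teacher confidence on the
    # matching pair marks the sample as hard.
    with torch.no_grad():
        t_prob = t_logits.softmax(dim=-1)
        pos_conf = t_prob[targets, targets]        # confidence on positives
        difficulty = (1.0 - pos_conf) ** gamma     # harder -> larger weight

    # Soft-label distillation: pull the student's similarity distribution
    # toward the teacher's (per-sample KL divergence).
    kd = F.kl_div(s_logits.log_softmax(dim=-1), t_prob,
                  reduction="none").sum(dim=-1)

    # Dynamic weighting: emphasize hard pairs in both loss terms.
    return (difficulty * (ce + kd)).mean()

if __name__ == "__main__":
    B, D = 8, 256
    embed = lambda: F.normalize(torch.randn(B, D), dim=-1)
    print(dcd_loss(embed(), embed(), embed(), embed()).item())
```

Weighting both the contrastive and distillation terms by teacher-estimated difficulty is one plausible reading of "dynamically learn samples of different difficulties"; the published method may combine these signals differently.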