Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation

Authors
Yixuan Wei, Han Hu, Zhenda Xie, Zheng Zhang, Yue Cao, Jianmin Bao, Dong Chen, Baining Guo
Source
Journal: Cornell University - arXiv · Cited by: 4
Identifier
DOI: 10.48550/arxiv.2205.14141
Abstract

Masked image modeling (MIM) learns representations with remarkably good fine-tuning performance, overshadowing previously prevalent pre-training approaches such as image classification, instance contrastive learning, and image-text alignment. In this paper, we show that the inferior fine-tuning performance of these pre-training approaches can be significantly improved by simple post-processing in the form of feature distillation (FD). Feature distillation converts the old representations into new representations that have a few desirable properties, much like the representations produced by MIM. These properties, which we collectively refer to as optimization friendliness, are identified and analyzed with a set of attention- and optimization-related diagnostic tools. With these properties, the new representations show strong fine-tuning performance. Specifically, the contrastive self-supervised learning methods become as competitive in fine-tuning as the state-of-the-art masked image modeling (MIM) algorithms. The fine-tuning performance of CLIP models is also significantly improved, with a CLIP ViT-L model reaching 89.0% top-1 accuracy on ImageNet-1K classification. On the 3-billion-parameter SwinV2-G model, fine-tuning accuracy improves by +1.5 mIoU / +1.1 mAP to 61.4 mIoU / 64.2 mAP on ADE20K semantic segmentation and COCO object detection, respectively, setting new records on both benchmarks. More importantly, our work provides a way for future research to focus more effort on the generality and scalability of the learned representations without being preoccupied with optimization friendliness, since it can be enhanced rather easily. The code will be available at https://github.com/SwinTransformer/Feature-Distillation.
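For readers who want a concrete picture of the post-processing step, the following is a minimal PyTorch-style sketch of feature distillation as the abstract describes it: a frozen pre-trained teacher (e.g., a contrastive or CLIP encoder) provides target features, which are whitened and then regressed by a freshly initialized student. The class name `FeatureDistiller`, the `whiten` layer, and the smooth-L1 objective are illustrative assumptions made here, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureDistiller(nn.Module):
    """Distill a frozen teacher's (whitened) features into a new student."""

    def __init__(self, teacher: nn.Module, student: nn.Module, dim: int):
        super().__init__()
        self.teacher = teacher.eval()  # frozen pre-trained encoder
        for p in self.teacher.parameters():
            p.requires_grad = False
        self.student = student  # freshly initialized encoder to distill into
        # Non-parametric whitening of teacher features: a token-wise LayerNorm
        # without a learnable affine transform (an assumed design choice here).
        self.whiten = nn.LayerNorm(dim, elementwise_affine=False)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            target = self.whiten(self.teacher(images))  # (B, N, D) token features
        pred = self.student(images)                     # (B, N, D) token features
        # Regress the whitened teacher features with a smooth L1 loss; after
        # distillation, the student (not the teacher) is fine-tuned downstream.
        return F.smooth_l1_loss(pred, target)
```

Under this reading, the distilled student keeps the teacher's semantics while acquiring the MIM-like optimization friendliness that the paper's diagnostic tools measure; downstream fine-tuning then starts from the student's weights rather than the original pre-trained ones.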