Computer science
Feature (linguistics)
Artificial intelligence
Generality
Set (abstract data type)
Scalability
Focus (optics)
Segmentation
Fine-tuning
Image (mathematics)
Pattern recognition (psychology)
Coding (set theory)
Machine learning
Psychology
Philosophy
Linguistics
Physics
Quantum mechanics
Database
Optics
Psychotherapist
Programming language
Authors
Yixuan Wei,Han Huang,Zhenda Xie,Zheng Zhang,Yong Cao,Jianmin Bao,Dong Chen,Baining Guo
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Citations: 4
Identifiers
DOI:10.48550/arxiv.2205.14141
Abstract
Masked image modeling (MIM) learns representations with remarkably good fine-tuning performance, overshadowing previously prevalent pre-training approaches such as image classification, instance contrastive learning, and image-text alignment. In this paper, we show that the inferior fine-tuning performance of these pre-training approaches can be significantly improved by a simple post-processing step in the form of feature distillation (FD). Feature distillation converts the old representations into new representations that share several desirable properties with those produced by MIM. These properties, which we collectively refer to as optimization friendliness, are identified and analyzed with a set of attention- and optimization-related diagnostic tools. With these properties, the new representations show strong fine-tuning performance. Specifically, contrastive self-supervised learning methods become as competitive in fine-tuning as state-of-the-art masked image modeling (MIM) algorithms. The fine-tuning performance of CLIP models is also significantly improved, with a CLIP ViT-L model reaching 89.0% top-1 accuracy on ImageNet-1K classification. On the 3-billion-parameter SwinV2-G model, fine-tuning accuracy is improved by +1.5 mIoU / +1.1 mAP to 61.4 mIoU / 64.2 mAP on ADE20K semantic segmentation and COCO object detection, respectively, setting new records on both benchmarks. More importantly, our work provides a way for future research to focus more effort on the generality and scalability of the learned representations without being preoccupied with optimization friendliness, since it can be enhanced rather easily. The code will be available at https://github.com/SwinTransformer/Feature-Distillation.
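The abstract describes feature distillation as a lightweight post-processing step: a frozen pre-trained model (e.g. a contrastive SSL or CLIP encoder) acts as a teacher, and a student of the same architecture is trained to predict the teacher's features, yielding a new, more fine-tuning-friendly representation. The following is a minimal sketch of that idea, assuming a PyTorch-style setup; the `whiten` normalization of teacher targets, the smooth-L1 objective, and all function and variable names here are illustrative assumptions based on the abstract, not the authors' released code (see the linked repository for the official implementation).

```python
# Minimal feature-distillation (FD) sketch, assuming a PyTorch-style setup.
# `teacher` is a frozen pre-trained encoder; `student` has the same architecture
# and is trained from scratch to match the teacher's (normalized) features.
import torch
import torch.nn as nn
import torch.nn.functional as F


def whiten(feats: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Per-token feature normalization (LayerNorm without affine parameters)."""
    mean = feats.mean(dim=-1, keepdim=True)
    var = feats.var(dim=-1, keepdim=True, unbiased=False)
    return (feats - mean) / torch.sqrt(var + eps)


def fd_step(teacher: nn.Module, student: nn.Module, head: nn.Module,
            images: torch.Tensor, optimizer: torch.optim.Optimizer) -> float:
    """One distillation step: the student predicts whitened teacher features.

    Assumes teacher and student produce feature tensors of matching shape
    (e.g. [batch, tokens, dim]); `head` is a small projection on the student side.
    """
    teacher.eval()
    with torch.no_grad():
        target = whiten(teacher(images))      # frozen pre-trained representation
    pred = head(student(images))              # student features + lightweight projection
    loss = F.smooth_l1_loss(pred, target)     # distillation objective

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After distillation, the teacher and the projection head are discarded and the student is fine-tuned on the downstream task in place of the original pre-trained model, which is what the abstract reports as the improved fine-tuning performance.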