Bridging the Gap Between Vision Transformers and Convolutional Neural Networks on Small Datasets

Keywords: Computer Science, Convolutional Neural Networks, Tokens, Feature Learning, Inductive Bias, Artificial Intelligence, Embedding, Pattern Recognition, Channel Representation, Machine Learning
Authors
Zhiying Lu, Hongtao Xie, Chuanbin Liu, Yongdong Zhang
Source
Journal: Cornell University - arXiv · Cited by: 24
Identifier
DOI: 10.48550/arxiv.2210.05958
Abstract

There remains an extreme performance gap between Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) when training from scratch on small datasets, which is attributed to the lack of inductive bias. In this paper, we further examine this problem and identify two weaknesses of ViTs in inductive biases: spatial relevance and diverse channel representation. First, on the spatial aspect, objects are locally compact and relevant, so fine-grained features need to be extracted from a token and its neighbors; however, the lack of data hinders ViTs from attending to this spatial relevance. Second, on the channel aspect, representations exhibit diversity across different channels, but scarce data does not enable ViTs to learn representations strong enough for accurate recognition. To this end, we propose the Dynamic Hybrid Vision Transformer (DHVT) as a solution that enhances these two inductive biases. On the spatial aspect, we adopt a hybrid structure in which convolution is integrated into the patch embedding and the multi-layer perceptron (MLP) module, forcing the model to capture token features along with their neighboring features. On the channel aspect, we introduce a dynamic feature aggregation module in the MLP and a new "head token" design in the multi-head self-attention module to help re-calibrate channel representations and make different channel-group representations interact with each other. The fusion of weak channel representations forms a representation strong enough for classification. With this design, we successfully eliminate the performance gap between CNNs and ViTs, and our DHVT achieves state-of-the-art performance with lightweight models: 85.68% on CIFAR-100 with 22.8M parameters and 82.3% on ImageNet-1K with 24.0M parameters. Code is available at https://github.com/ArieSeirack/DHVT.
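The spatial-aspect ideas in the abstract can be illustrated with a minimal PyTorch sketch (not the authors' released code; module names, channel widths, and the CIFAR-sized input are illustrative assumptions): a convolutional patch-embedding stem replaces the single large-stride projection, and a depthwise 3x3 convolution is inserted into the transformer MLP so that each token also mixes features from its spatial neighbors.

```python
# Illustrative sketch of a hybrid ViT block's spatial components, assuming a
# 32x32 input (CIFAR) and an effective patch size of 4. Not the DHVT reference
# implementation; names and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class ConvPatchEmbed(nn.Module):
    """Overlapping conv stem instead of one large-stride linear projection."""
    def __init__(self, in_ch=3, dim=192, patch=4):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_ch, dim // 2, 3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv2d(dim // 2, dim, 3, stride=patch // 2, padding=1),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        x = self.proj(x)                       # (B, dim, H/patch, W/patch)
        return x.flatten(2).transpose(1, 2)    # (B, N, dim) token sequence

class ConvMLP(nn.Module):
    """Transformer MLP with a depthwise conv for local neighborhood mixing."""
    def __init__(self, dim=192, hidden=384, grid=8):
        super().__init__()
        self.grid = grid
        self.fc1 = nn.Linear(dim, hidden)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.act = nn.GELU()

    def forward(self, x):                      # x: (B, N, dim), N = grid*grid
        b, n, _ = x.shape
        h = self.act(self.fc1(x))
        h = h.transpose(1, 2).reshape(b, -1, self.grid, self.grid)
        h = self.act(self.dw(h))               # each token mixes its 3x3 neighborhood
        h = h.flatten(2).transpose(1, 2)
        return self.fc2(h)

tokens = ConvPatchEmbed()(torch.randn(2, 3, 32, 32))  # -> (2, 64, 192)
out = ConvMLP()(tokens)                               # -> (2, 64, 192)
```

Because the depthwise convolution operates on the token grid, this MLP keeps the sequence shape of a standard transformer block while injecting the locality bias that the abstract argues small datasets cannot teach a plain ViT.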
