Depth-Wise Convolutions in Vision Transformers for Efficient Training on Small Datasets

Keywords: Transformer · Training (meteorology) · Computer science · Artificial intelligence · Computer vision · Pattern recognition (psychology) · Engineering · Geography · Electrical engineering · Voltage · Meteorology
Authors
Tianxiao Zhang,Wenju Xu,Bo Luo,Guanghui Wang
Source
Journal: Cornell University - arXiv · Cited by: 1
Identifier
DOI:10.48550/arxiv.2407.19394
Abstract

The Vision Transformer (ViT) leverages the Transformer encoder to capture global information by dividing images into patches, and achieves superior performance across various computer vision tasks. However, the self-attention mechanism of ViT captures the global context from the outset, overlooking the inherent relationships between neighboring pixels in images or videos. Transformers mainly focus on global information while ignoring fine-grained local details; consequently, ViT lacks inductive bias when trained on image or video datasets. In contrast, convolutional neural networks (CNNs), with their reliance on local filters, possess an inherent inductive bias, making them more data-efficient and quicker to converge than ViT. In this paper, we present a lightweight Depth-Wise Convolution module as a shortcut in ViT models, bypassing entire Transformer blocks so that the models capture both local and global information with minimal overhead. Additionally, we introduce two architecture variants: one applies the Depth-Wise Convolution module across multiple Transformer blocks to save parameters, and the other incorporates independent parallel Depth-Wise Convolution modules with different kernels to enhance the acquisition of local information. The proposed approach boosts the performance of ViT models on image classification, object detection, and instance segmentation by a large margin, especially on small datasets, as evaluated on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet for image classification, and on COCO for object detection and instance segmentation. The source code can be accessed at https://github.com/ZTX-100/Efficient_ViT_with_DW.
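To make the core idea concrete, below is a minimal PyTorch sketch of a depth-wise convolution shortcut placed in parallel with a standard ViT encoder block. The class names (DWConvShortcut, BlockWithDWShortcut), the additive fusion of the two paths, and the assumption of a square patch grid with no class token are illustrative choices, not the authors' exact implementation; the reference code is in the linked repository.

import torch
import torch.nn as nn


class DWConvShortcut(nn.Module):
    # Depth-wise convolution applied to the patch-token grid as a local shortcut.
    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        # groups=dim makes the convolution depth-wise (one filter per channel),
        # so the overhead is only dim * kernel_size * kernel_size weights.
        self.dwconv = nn.Conv2d(dim, dim, kernel_size,
                                padding=kernel_size // 2, groups=dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, N, C) patch tokens with N == h * w; reshape to a (B, C, H, W)
        # grid, convolve locally, then flatten back to tokens.
        b, n, c = x.shape
        grid = x.transpose(1, 2).reshape(b, c, h, w)
        grid = self.dwconv(grid)
        return grid.reshape(b, c, n).transpose(1, 2)


class BlockWithDWShortcut(nn.Module):
    # A Transformer block with a depth-wise convolution shortcut bypassing it:
    # the global path (self-attention + MLP) and the local path are summed.
    def __init__(self, block: nn.Module, dim: int, kernel_size: int = 3):
        super().__init__()
        self.block = block              # any module mapping (B, N, C) -> (B, N, C)
        self.local = DWConvShortcut(dim, kernel_size)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        return self.block(x) + self.local(x, h, w)


if __name__ == "__main__":
    dim, h, w = 192, 14, 14
    # Stand-in for a real ViT encoder block, used only to exercise the shapes.
    block = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim))
    layer = BlockWithDWShortcut(block, dim, kernel_size=3)
    tokens = torch.randn(2, h * w, dim)
    print(layer(tokens, h, w).shape)    # torch.Size([2, 196, 192])

Under the same assumptions, the first variant described in the abstract would share one DWConvShortcut across several consecutive Transformer blocks to save parameters, and the second would sum the outputs of independent parallel shortcuts constructed with different kernel_size values.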