CAS-ViT: Convolutional Additive Self-attention Vision Transformers for Efficient Mobile Applications

Tags: Transformer · Computer Science · Artificial Intelligence · Electrical Engineering
Authors
Tianfang Zhang, Lei Li, Yang Zhou, Wentao Liu, Chen Qian, Xiangyang Ji
Source
Journal: Cornell University - arXiv · Cited by: 13
Identifier
DOI: 10.48550/arxiv.2408.03703
Abstract

Vision Transformers (ViTs) mark a revolutionary advance in neural networks with their token mixer's powerful global context modeling capability. However, pairwise token affinity and complex matrix operations limit their deployment in resource-constrained scenarios and real-time applications such as mobile devices, despite considerable efforts in prior work. In this paper, we introduce CAS-ViT: Convolutional Additive Self-attention Vision Transformers, to achieve a balance between efficiency and performance in mobile applications. First, we argue that the ability of token mixers to capture global contextual information hinges on multiple information interactions, such as those in the spatial and channel domains. We then propose the Convolutional Additive Token Mixer (CATM), which employs underlying spatial and channel attention as novel interaction forms. This module eliminates troublesome complex operations such as matrix multiplication and Softmax. We introduce a hybrid architecture built from Convolutional Additive Self-attention (CAS) blocks, each of which uses CATM, and further build a family of lightweight networks that can be easily extended to various downstream tasks. Finally, we evaluate CAS-ViT across a variety of vision tasks, including image classification, object detection, instance segmentation, and semantic segmentation. Our M and T models achieve 83.0%/84.1% top-1 accuracy on ImageNet-1K with only 12M/21M parameters. Meanwhile, throughput evaluations on GPUs, ONNX, and iPhones also demonstrate superior results compared to other state-of-the-art backbones. Extensive experiments demonstrate that our approach achieves a better balance of performance, efficient inference, and ease of deployment. Our code and models are available at: \url{https://github.com/Tianfang-Zhang/CAS-ViT}
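To make the "additive" idea concrete: standard self-attention computes a pairwise similarity Softmax(QK^T)V, which is quadratic in the number of tokens, whereas an additive token mixer replaces that pairwise matrix product with cheap per-branch context maps that are summed and then used to gate V elementwise. The sketch below is a minimal NumPy illustration of this general principle, not the paper's actual CATM (which uses convolutional spatial and channel attention; the exact gating functions here are assumptions for illustration).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def additive_token_mixer(q, k, v):
    """Hypothetical sketch of an additive token mixer.

    q, k, v: arrays of shape (N, C) for N tokens with C channels.
    Instead of the O(N^2 * C) pairwise similarity Softmax(Q K^T) V,
    each branch is reduced to a cheap context map; the two maps are
    *added* and used to gate V elementwise, giving O(N * C) cost
    with no matrix multiplication and no Softmax.
    """
    # channel-wise context from Q: a per-token, per-channel gate
    q_ctx = sigmoid(q)                               # (N, C)
    # spatial context from K: a per-token gate pooled over channels
    k_ctx = sigmoid(k.mean(axis=1, keepdims=True))   # (N, 1)
    # additive interaction replaces the pairwise Q K^T product
    mixed = q_ctx + k_ctx                            # broadcasts to (N, C)
    return mixed * v                                 # elementwise gating of V

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16)) for _ in range(3))
out = additive_token_mixer(q, k, v)
print(out.shape)  # (8, 16)
```

Because every operation is elementwise or a reduction, the cost grows linearly with the token count — the property that makes this family of mixers attractive on mobile hardware.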
