VOLO: Vision Outlooker for Visual Recognition

Authors
Li Yuan, Qibin Hou, Zihang Jiang, Jiashi Feng, Shuicheng Yan
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Pages: 1-13 · Cited by: 200
Identifier
DOI: 10.1109/tpami.2022.3206108
Abstract

Recently, Vision Transformers (ViTs) have been broadly explored in visual recognition. Because they are inefficient at encoding fine-level features, the performance of ViTs is still inferior to that of state-of-the-art CNNs when trained from scratch on a midsize dataset such as ImageNet. Through experimental analysis, we find two reasons for this: 1) the simple tokenization of input images fails to model important local structure such as edges and lines, leading to low training-sample efficiency; 2) the redundant attention backbone design of ViTs leads to limited feature richness under fixed computation budgets and limited training samples. To overcome these limitations, we present a new, simple, and generic architecture, termed Vision Outlooker (VOLO), which implements a novel outlook attention operation that dynamically conducts local feature aggregation in a sliding-window manner across the input image. Unlike self-attention, which models global dependencies of local features at a coarse level, outlook attention encodes finer-level features, which are critical for recognition but ignored by self-attention. Outlook attention also breaks the bottleneck of self-attention, whose computation cost scales quadratically with the input spatial dimension, and is thus much more memory-efficient. Compared to our Tokens-To-Token Vision Transformer (T2T-ViT), VOLO can more efficiently encode the fine-level features that are essential for high-performance visual recognition. Experiments show that with only 26.6M learnable parameters, VOLO achieves 84.2% top-1 accuracy on ImageNet-1K without using extra training data, 2.7% better than T2T-ViT with a comparable number of parameters. When the model is scaled up to 296M parameters, its accuracy further improves to 87.1%, setting a new record for ImageNet-1K classification.
In addition, we use the proposed VOLO as a pretrained backbone and report superior performance on downstream tasks such as semantic segmentation. Code is available at https://github.com/sail-sg/volo.
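The abstract's core idea, generating local attention weights directly from each token with a linear layer and aggregating values over a sliding K×K window, can be sketched as follows. This is a minimal single-head NumPy illustration of the mechanism as described above, not the authors' implementation; the names `outlook_attention`, `W_attn`, and `W_v` are illustrative, and the reference code in the linked repository uses batched, multi-head PyTorch operations (unfold/fold) instead of explicit loops.

```python
import numpy as np

def outlook_attention(x, W_attn, W_v, K=3):
    """Simplified single-head outlook attention (illustrative sketch).

    x:      (H, W, C) feature map.
    W_attn: (C, K**4) projection that maps each token directly to a
            (K*K, K*K) attention matrix -- no query-key dot products.
    W_v:    (C, C) value projection.
    """
    H, Wd, C = x.shape
    v = x @ W_v                                   # value embeddings
    pad = K // 2
    v_pad = np.pad(v, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros_like(v)
    for i in range(H):
        for j in range(Wd):
            # attention weights produced by a linear layer + softmax
            a = (x[i, j] @ W_attn).reshape(K * K, K * K)
            a = np.exp(a - a.max(axis=-1, keepdims=True))
            a /= a.sum(axis=-1, keepdims=True)
            # gather the K x K window of values centered at (i, j)
            win = v_pad[i:i + K, j:j + K].reshape(K * K, C)
            agg = (a @ win).reshape(K, K, C)
            # scatter the aggregated window back, summing overlaps
            # (the "fold" step; overlapping windows accumulate)
            for di in range(K):
                for dj in range(K):
                    ii, jj = i + di - pad, j + dj - pad
                    if 0 <= ii < H and 0 <= jj < Wd:
                        out[ii, jj] += agg[di, dj]
    return out

# example: random 4x5 feature map with 8 channels
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 5, 8))
out = outlook_attention(feats, rng.standard_normal((8, 81)) * 0.1,
                        rng.standard_normal((8, 8)), K=3)
print(out.shape)  # (4, 5, 8)
```

Note that the per-location cost is O(K^4) rather than quadratic in the number of tokens, which is why the abstract describes outlook attention as more memory-efficient than self-attention on large spatial inputs.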
