Concepts
Computer science
Security token
Segmentation
Inference
Transformer
Mobile device
Architecture
Latency (audio)
Artificial intelligence
Pyramid (geometry)
Coding (set theory)
Computer vision
Pattern recognition (psychology)
Computer network
Programming language
Engineering
Telecommunications
Mathematics
Electrical engineering
Visual arts
Art
Set (abstract data type)
Voltage
Geometry
Operating system
Authors
Wenqiang Zhang, Zilong Huang, Guozhong Luo, Tao Chen, Xinggang Wang, Wenyu Liu, Gang Yu, Chunhua Shen
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Identifier
DOI: 10.48550/arxiv.2204.05525
Abstract
Although vision transformers (ViTs) have achieved great success in computer vision, the heavy computational cost hampers their applications to dense prediction tasks such as semantic segmentation on mobile devices. In this paper, we present a mobile-friendly architecture named Token Pyramid Vision Transformer (TopFormer). The proposed TopFormer takes Tokens from various scales as input to produce scale-aware semantic features, which are then injected into the corresponding tokens to augment the representation. Experimental results demonstrate that our method significantly outperforms CNN- and ViT-based networks across several semantic segmentation datasets and achieves a good trade-off between accuracy and latency. On the ADE20K dataset, TopFormer achieves 5% higher accuracy in mIoU than MobileNetV3 with lower latency on an ARM-based mobile device. Furthermore, the tiny version of TopFormer achieves real-time inference on an ARM-based mobile device with competitive results. The code and models are available at: https://github.com/hustvl/TopFormer
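To make the abstract's description more concrete, below is a minimal PyTorch sketch of the general idea: tokens from several scales are pooled to one small resolution, processed jointly by a Transformer to produce scale-aware semantics, and the semantics are injected back into each scale's tokens. This is not the authors' implementation (that is in the linked repository); the class names, channel sizes, pooling resolution, and the sigmoid-gated injection are all illustrative assumptions.

```python
# Minimal sketch of the idea in the abstract: pool multi-scale tokens to a
# shared small resolution, run a Transformer over the concatenated tokens to
# obtain scale-aware semantics, then inject the semantics back into each
# scale's tokens. Names, shapes, and layer choices are assumptions for
# illustration, NOT the TopFormer code (see the GitHub repo for that).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticsInjection(nn.Module):
    """Fuse local tokens with upsampled global semantics via a gated sum."""

    def __init__(self, local_ch: int, global_ch: int):
        super().__init__()
        self.local_proj = nn.Conv2d(local_ch, global_ch, 1)
        self.gate = nn.Conv2d(global_ch, global_ch, 1)

    def forward(self, local_feat, global_feat):
        # Upsample the global semantics to the local token resolution.
        g = F.interpolate(global_feat, size=local_feat.shape[2:],
                          mode="bilinear", align_corners=False)
        x = self.local_proj(local_feat)
        return x * torch.sigmoid(self.gate(g)) + g


class TokenPyramidSketch(nn.Module):
    def __init__(self, chans=(32, 64, 128), embed_dim=128, target_hw=(8, 8)):
        super().__init__()
        self.target_hw = target_hw
        self.in_proj = nn.ModuleList(nn.Conv2d(c, embed_dim, 1) for c in chans)
        # Stand-in for the Transformer blocks producing scale-aware semantics.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.inject = nn.ModuleList(SemanticsInjection(c, embed_dim)
                                    for c in chans)

    def forward(self, feats):
        # feats: list of per-scale maps [(B, C_i, H_i, W_i), ...]
        b = feats[0].shape[0]
        # Pool every scale to one small resolution and flatten into tokens.
        toks = [F.adaptive_avg_pool2d(p(f), self.target_hw)
                .flatten(2).transpose(1, 2)
                for p, f in zip(self.in_proj, feats)]
        sem = self.encoder(torch.cat(toks, dim=1))  # (B, sum of tokens, D)
        # Split per scale, reshape back to maps, inject into local tokens.
        n = self.target_hw[0] * self.target_hw[1]
        outs = []
        for i, f in enumerate(feats):
            s = sem[:, i * n:(i + 1) * n].transpose(1, 2)
            s = s.reshape(b, -1, *self.target_hw)
            outs.append(self.inject[i](f, s))
        return outs


if __name__ == "__main__":
    feats = [torch.randn(2, 32, 64, 64), torch.randn(2, 64, 32, 32),
             torch.randn(2, 128, 16, 16)]
    for out in TokenPyramidSketch()(feats):
        print(out.shape)
```

Pooling every scale to one small grid before attention is what keeps the cost mobile-friendly in this sketch: the Transformer sees a short, fixed-length token sequence regardless of the input resolution, and the per-scale injection step restores spatial detail afterwards.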