Keywords
Computer science
Segmentation
Encoder
Object detection
Transformer
Computer engineering
Architecture
Artificial intelligence
Convolutional neural network
Machine learning
Quantum mechanics
Operating system
Physics
Art
Visual arts
Voltage
Authors
Wuyang Chen,Xianzhi Du,Fan Yang,Lucas Beyer,Xiaohua Zhai,Tsung-Yi Lin,Huizhong Chen,Jing Li,Song Xue,Zhangyang Wang,Denny Zhou
Identifier
DOI:10.1007/978-3-031-20080-9_41
Abstract
This work presents a simple vision transformer design as a strong baseline for object localization and instance segmentation tasks. Transformers have recently demonstrated competitive performance in image classification. To adapt ViT to object detection and dense prediction tasks, many works inherit the multistage design from convolutional networks and heavily customize the ViT architecture. Behind this design, the goal is to pursue a better trade-off between computational cost and effective aggregation of multiscale global contexts. However, existing works adopt the multistage architectural design as a black-box solution without a clear understanding of its true benefits. In this paper, we comprehensively study three architecture design choices on ViT – spatial reduction, doubled channels, and multiscale features – and demonstrate that a vanilla ViT architecture can fulfill this goal without handcrafted multiscale features, maintaining the original ViT design philosophy. We further derive a scaling rule to optimize our model’s trade-off between accuracy and computational cost / model size. By leveraging a constant feature resolution and hidden size throughout the encoder blocks, we propose a simple and compact ViT architecture called Universal Vision Transformer (UViT) that achieves strong performance on the COCO object detection and instance segmentation benchmarks. Our code is available at https://github.com/tensorflow/models/tree/master/official/projects/uvit .
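The abstract's central design point – a constant feature resolution and hidden size through every encoder block, rather than a multistage pyramid with spatial reduction and channel doubling – can be illustrated with a minimal sketch. The code below is not the authors' implementation (that is at the linked repository); it is a toy single-head pre-norm transformer encoder in NumPy with random placeholder weights, whose only purpose is to show that token count and width stay fixed across all blocks. All function names here (`encoder_block`, `uvit_encoder`) are illustrative, not from the paper's codebase.

```python
import numpy as np

def layernorm(x, eps=1e-6):
    mu = x.mean(-1, keepdims=True)
    sd = x.std(-1, keepdims=True)
    return (x - mu) / (sd + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def encoder_block(x, rng):
    """One pre-norm block: single-head self-attention + MLP, each with a residual.

    Weights are random placeholders; a trained model would learn them.
    Input and output shapes are identical: (num_tokens, hidden_size).
    """
    n, d = x.shape
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    h = layernorm(x)
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    x = x + softmax(q @ k.T / np.sqrt(d)) @ v @ Wo   # attention residual
    W1 = rng.standard_normal((d, 4 * d)) / np.sqrt(d)
    W2 = rng.standard_normal((4 * d, d)) / np.sqrt(4 * d)
    h = layernorm(x)
    x = x + np.maximum(h @ W1, 0.0) @ W2             # MLP residual
    return x

def uvit_encoder(tokens, depth, seed=0):
    """Stack `depth` identical blocks; no spatial reduction, no channel doubling."""
    rng = np.random.default_rng(seed)
    x = tokens
    for _ in range(depth):
        x = encoder_block(x, rng)
        assert x.shape == tokens.shape  # resolution and width never change
    return x
```

In contrast, a multistage (pyramid) design would halve the token count and double `d` between stages; the single-scale design above is what lets one feature map feed the detection head directly.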