Computer vision
Artificial intelligence
Computer science
Transformer
Scaling
High resolution
Pattern recognition (psychology)
Engineering
Voltage
Mathematics
Electrical engineering
Remote sensing
Geometry
Geology
Authors
Ting Yao, Yehao Li, Yingwei Pan, Tao Mei
Identifier
DOI:10.1109/tpami.2024.3379457
Abstract
Hybrid deep models combining the Vision Transformer (ViT) and the Convolutional Neural Network (CNN) have emerged as a powerful class of backbones for vision tasks. Scaling up the input resolution of such hybrid backbones naturally strengthens model capacity, but inevitably incurs a heavy computational cost that scales quadratically. Instead, we present a new hybrid backbone with HIgh-Resolution Inputs (namely HIRI-ViT), which upgrades the prevalent four-stage ViT to a five-stage ViT tailored for high-resolution inputs. HIRI-ViT is built on the idea of decomposing typical CNN operations into two parallel CNN branches in a cost-efficient manner. One high-resolution branch directly takes the primary high-resolution features as inputs, but uses fewer convolution operations. The other low-resolution branch first performs down-sampling and then applies more convolution operations over the resulting low-resolution features. Experiments on both a recognition task (the ImageNet-1K dataset) and dense prediction tasks (the COCO and ADE20K datasets) demonstrate the superiority of HIRI-ViT. More remarkably, under comparable computational cost (∼5.0 GFLOPs), HIRI-ViT achieves the best published Top-1 accuracy to date of 84.3% on ImageNet with 448×448 inputs, an absolute improvement of 0.9% over the 83.4% of iFormer-S with 224×224 inputs.
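The cost argument in the abstract can be made concrete with simple FLOPs arithmetic. The sketch below is illustrative only, not the actual HIRI-ViT design: the layer counts, channel width, and 2× downsampling factor are assumptions chosen to show why splitting work between a lightweight high-resolution branch and a deeper low-resolution branch is cheaper than running all convolutions at full resolution.

```python
def conv_flops(h, w, c_in, c_out, k=3):
    # Multiply-accumulates for one k x k convolution layer
    # over an h x w feature map (ignoring bias and stride).
    return h * w * c_in * c_out * k * k

H, W, C = 448, 448, 64  # hypothetical high-resolution stage

# Naive design: 4 conv layers applied directly at full resolution.
naive = 4 * conv_flops(H, W, C, C)

# Two-branch decomposition in the spirit described by the abstract:
# high-res branch keeps 1 lightweight conv at full resolution ...
high_branch = conv_flops(H, W, C, C)
# ... while the low-res branch downsamples 2x and runs 3 convs,
# each at a quarter of the full-resolution spatial cost.
low_branch = 3 * conv_flops(H // 2, W // 2, C, C)
hybrid = high_branch + low_branch

print(hybrid / naive)  # prints 0.4375: under half the naive cost
```

With these assumed numbers the decomposition keeps the same total of four convolution layers while cutting the stage cost to about 44% of the naive full-resolution design, which is the mechanism that lets the backbone accept 448×448 inputs at a GFLOPs budget comparable to a 224×224 model.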