Computer Science
Transformer
Computation
Inference
Artificial Intelligence
Lexical Analysis
Computer Vision
Algorithm
Engineering
Electrical Engineering
Voltage
Authors
Hezheng Lin, Cheng Xing, Xiangyu Wu, Dong Shen
Identifier
DOI:10.1109/icme52920.2022.9859720
Abstract
Since Transformer has found widespread use in NLP, its potential in CV has been recognized and has inspired many new approaches. However, replacing word tokens with image patches for Transformer after tokenizing the image requires vast computation (e.g., ViT), which bottlenecks model training and inference. In this paper, we propose a new attention mechanism in Transformer, termed Cross Attention, which alternates attention within each image patch instead of over the whole image to capture local information, and applies attention between image patches divided from single-channel feature maps to capture global information. Both operations require less computation than standard self-attention in Transformer. Based on that, we build a hierarchical network called Cross Attention Transformer (CAT) for other vision tasks. Our model achieves 82.8% on ImageNet-1K and improves the performance of other methods on COCO and ADE20K, illustrating that our network has the potential to serve as a general backbone. The code and models are available at https://github.com/linhezheng19/CAT.
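The sketch below is a minimal illustration of the two attention patterns the abstract describes: attention within each image patch for local information, and attention between the patches of a single-channel feature map for global information. The function names, the patch size of 7, and the use of nn.MultiheadAttention are assumptions made for illustration only; they do not reproduce the authors' implementation, which is available at https://github.com/linhezheng19/CAT.

```python
# Sketch of the two attention patterns described in the abstract,
# assuming a feature map x of shape (B, H, W, C) and an illustrative
# patch size P = 7. Not the authors' implementation (see the official repo).
import torch
import torch.nn as nn


def inner_patch_attention(x, attn, patch=7):
    """Local attention: pixels inside each P*P patch attend to each other."""
    B, H, W, C = x.shape
    # Split the map into non-overlapping patches and flatten each patch
    # into a sequence of P*P pixel tokens: (B * num_patches, P*P, C).
    x = x.reshape(B, H // patch, patch, W // patch, patch, C)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, patch * patch, C)
    x, _ = attn(x, x, x)  # self-attention restricted to one patch
    # Undo the patch partition back to (B, H, W, C).
    x = x.reshape(B, H // patch, W // patch, patch, patch, C)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
    return x


def cross_patch_attention(x, attn, patch=7):
    """Global attention: the patches of one single-channel map attend to each other."""
    B, H, W, C = x.shape
    n = (H // patch) * (W // patch)  # number of patches per channel
    x = x.reshape(B, H // patch, patch, W // patch, patch, C)
    # One sequence per channel; each token is a flattened P*P patch.
    x = x.permute(0, 5, 1, 3, 2, 4).reshape(B * C, n, patch * patch)
    x, _ = attn(x, x, x)  # self-attention across patches of that channel
    # Undo the partition back to (B, H, W, C).
    x = x.reshape(B, C, H // patch, W // patch, patch, patch)
    x = x.permute(0, 2, 4, 3, 5, 1).reshape(B, H, W, C)
    return x


if __name__ == "__main__":
    x = torch.randn(2, 56, 56, 96)
    local_attn = nn.MultiheadAttention(embed_dim=96, num_heads=3, batch_first=True)
    global_attn = nn.MultiheadAttention(embed_dim=49, num_heads=1, batch_first=True)
    y = inner_patch_attention(x, local_attn, patch=7)
    z = cross_patch_attention(y, global_attn, patch=7)
    print(y.shape, z.shape)  # both torch.Size([2, 56, 56, 96])
```

Under these assumptions, the local step attends over only P*P tokens per patch and the global step over (H/P)*(W/P) tokens per channel, so both sequences stay much shorter than the full H*W token sequence of standard self-attention, which is the source of the computational savings the abstract claims.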