Transformer
Computer Science
Artificial Intelligence
Embedding
Segmentation
Computer Vision
Image Segmentation
Token
Pattern Recognition (Psychology)
Engineering
Electrical Engineering
Computer Security
Voltage
Authors
Wenxiao Wang,Wei Chen,Qibo Qiu,Long Chen,Boxi Wu,Binbin Lin,Xiaofei He,Wei Liu
Identifier
DOI:10.1109/tpami.2023.3341806
Abstract
While features of different scales are perceptually important to visual inputs, existing vision transformers do not yet exploit them explicitly. To this end, we first propose a cross-scale vision transformer, CrossFormer. It introduces a cross-scale embedding layer (CEL) and long-short distance attention (LSDA). On the one hand, CEL blends each token with multiple patches of different scales, providing the self-attention module itself with cross-scale features. On the other hand, LSDA splits the self-attention module into a short-distance one and a long-distance counterpart, which not only reduces the computational burden but also keeps both small-scale and large-scale features in the tokens. Moreover, through experiments on CrossFormer, we observe two further issues that affect vision transformers' performance, namely enlarging self-attention maps and amplitude explosion. We therefore propose a progressive group size (PGS) paradigm and an amplitude cooling layer (ACL) to alleviate the two issues, respectively. CrossFormer incorporating PGS and ACL is called CrossFormer++. Extensive experiments show that CrossFormer++ outperforms other vision transformers on image classification, object detection, instance segmentation, and semantic segmentation tasks.
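The abstract describes two mechanisms concretely enough to sketch: CEL embeds an image with several parallel convolutions that share one stride but use different kernel sizes, concatenating the results so every token carries multi-scale features, while LSDA restricts self-attention to groups of nearby tokens (short distance) or to tokens sampled at a fixed interval (long distance). Below is a minimal PyTorch sketch of both groupings; the kernel sizes, channel split, group size G, and interval I are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class CrossScaleEmbedding(nn.Module):
    # CEL idea: parallel convolutions share one stride but use different
    # kernel sizes; concatenating their outputs channel-wise gives every
    # token patches of several scales. The kernel sizes and the even
    # channel split below are illustrative assumptions.
    def __init__(self, in_ch=3, out_ch=96, kernel_sizes=(4, 8, 16, 32), stride=4):
        super().__init__()
        base = out_ch // len(kernel_sizes)
        chans = [base] * (len(kernel_sizes) - 1)
        chans.append(out_ch - sum(chans))  # remainder goes to the last scale
        self.projs = nn.ModuleList(
            nn.Conv2d(in_ch, c, kernel_size=k, stride=stride, padding=(k - stride) // 2)
            for k, c in zip(kernel_sizes, chans)
        )

    def forward(self, x):  # x: (B, 3, H, W), H and W divisible by the stride
        return torch.cat([p(x) for p in self.projs], dim=1)  # (B, out_ch, H/4, W/4)

def short_distance_groups(x, G):
    # SDA grouping: self-attention would run inside non-overlapping G x G windows.
    B, H, W, C = x.shape
    x = x.view(B, H // G, G, W // G, G, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, G * G, C)

def long_distance_groups(x, I):
    # LDA grouping: each group collects tokens sampled at interval I, so a
    # single group spans the whole feature map at coarse granularity.
    B, H, W, C = x.shape
    x = x.view(B, H // I, I, W // I, I, C)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(-1, (H // I) * (W // I), C)

if __name__ == "__main__":
    img = torch.randn(2, 3, 224, 224)
    tokens = CrossScaleEmbedding()(img)            # (2, 96, 56, 56)
    grid = tokens.permute(0, 2, 3, 1)              # (B, H, W, C) token grid
    print(short_distance_groups(grid, G=7).shape)  # (128, 49, 96): local groups
    print(long_distance_groups(grid, I=8).shape)   # (128, 49, 96): dilated groups

Running ordinary multi-head attention within each group of G*G local tokens or (H/I)*(W/I) dilated tokens, instead of over all H*W tokens at once, is what reduces the quadratic cost of full self-attention while keeping both small-scale and large-scale interactions available.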