Authors
Xinyu Liu,Houwen Peng,Ningxin Zheng,Yuqing Yang,Han Huang,Yixuan Yuan
Identifier
DOI: 10.1109/cvpr52729.2023.01386
Abstract
Vision transformers have shown great success due to their high model capabilities. However, their remarkable performance is accompanied by heavy computation costs, which makes them unsuitable for real-time applications. In this paper, we propose a family of high-speed vision transformers named EfficientViT. We find that the speed of existing transformer models is commonly bounded by memory-inefficient operations, especially the tensor reshaping and element-wise functions in multi-head self-attention (MHSA). Therefore, we design a new building block with a sandwich layout, i.e., using a single memory-bound MHSA between efficient FFN layers, which improves memory efficiency while enhancing channel communication. Moreover, we discover that the attention maps share high similarities across heads, leading to computational redundancy. To address this, we present a cascaded group attention module feeding attention heads with different splits of the full feature, which not only saves computation cost but also improves attention diversity. Comprehensive experiments demonstrate EfficientViT outperforms existing efficient models, striking a good trade-off between speed and accuracy. For instance, our EfficientViT-M5 surpasses MobileNetV3-Large by 1.9% in accuracy, while getting 40.4% and 45.2% higher throughput on Nvidia V100 GPU and Intel Xeon CPU, respectively. Compared to the recent efficient model MobileViT-XXS, EfficientViT-M2 achieves 1.8% superior accuracy, while running $5.8\times/3.7\times$ faster on the GPU/CPU, and $7.4\times$ faster when converted to ONNX format. Code and models are available.
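The core idea of cascaded group attention, as the abstract describes it, is that each head receives a different split of the full feature rather than a projection of the whole, and heads are chained so that one head's output refines the next head's input. A minimal NumPy sketch of that idea follows; the weight lists `wq`, `wk`, `wv` and the single-scale formulation are simplifying assumptions for illustration, not the paper's full implementation (which adds, e.g., learned projections and an output projection).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cascaded_group_attention(x, wq, wk, wv, num_heads):
    """x: (n, d) token features. Each head attends over a distinct d/num_heads
    feature split, and the previous head's output is added to the next head's
    input (the cascade), which diversifies attention maps across heads.
    wq/wk/wv: per-head (d/num_heads, d/num_heads) weight matrices (hypothetical
    shapes chosen for this sketch)."""
    n, d = x.shape
    head_dim = d // num_heads
    splits = np.split(x, num_heads, axis=1)      # different feature split per head
    outs, prev = [], 0.0
    for i in range(num_heads):
        xi = splits[i] + prev                    # cascade: feed previous head output
        q, k, v = xi @ wq[i], xi @ wk[i], xi @ wv[i]
        attn = softmax(q @ k.T / np.sqrt(head_dim))
        prev = attn @ v                          # (n, head_dim)
        outs.append(prev)
    return np.concatenate(outs, axis=1)          # (n, d), same width as input
```

Because each head works on a `d/num_heads` slice instead of the full width, the Q/K/V projections cost a fraction of standard MHSA, which is where the computation saving the abstract mentions comes from.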