Keywords
Computer science, Failure, Segmentation, Encoder, Transformer, Architecture, Artificial intelligence, Convolutional neural network, Transpose, Mobile device, Code (set theory), Convolution (computer science), Encoding, Pattern recognition (psychology), Computer engineering, Artificial neural network, Computer architecture, Parallel computing, Voltage, Physics, Art, Gene, Visual arts, Quantum mechanics, Feature vector, Set (abstract data type), Chemistry, Biochemistry, Programming language, Operating system
Authors
Muhammad Maaz, Abdelrahman Shaker, Hisham Cholakkal, Salman Khan, Syed Waqas Zamir, Rao Muhammad Anwer, Fahad Shahbaz Khan
Identifier
DOI: 10.1007/978-3-031-25082-8_1
Abstract
In the pursuit of ever-increasing accuracy, large and complex neural networks are usually developed. Such models demand high computational resources and therefore cannot be deployed on edge devices. It is of great interest to build resource-efficient general-purpose networks due to their usefulness in several application areas. In this work, we strive to effectively combine the strengths of both CNN and Transformer models and propose a new efficient hybrid architecture, EdgeNeXt. Specifically, in EdgeNeXt, we introduce a split depth-wise transpose attention (SDTA) encoder that splits input tensors into multiple channel groups and utilizes depth-wise convolutions along with self-attention across the channel dimension to implicitly increase the receptive field and encode multi-scale features. Our extensive experiments on classification, detection, and segmentation tasks reveal the merits of the proposed approach, outperforming state-of-the-art methods with comparatively lower compute requirements. Our EdgeNeXt model with 1.3M parameters achieves 71.2% top-1 accuracy on ImageNet-1K, outperforming MobileViT with an absolute gain of 2.2% and a 28% reduction in FLOPs. Further, our EdgeNeXt model with 5.6M parameters achieves 79.4% top-1 accuracy on ImageNet-1K. The code and models are available at https://t.ly/_Vu9.
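To make the SDTA description above concrete, the following PyTorch module is a minimal, illustrative sketch reconstructed from the abstract alone, not the authors' released implementation. The channel split with hierarchical depth-wise 3x3 convolutions, the number of splits and heads, the channel-wise (transposed) attention, and the learnable temperature are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SDTASketch(nn.Module):
    """Sketch of a split depth-wise transpose attention encoder.

    Hypothetical reconstruction from the abstract: split channels into
    groups, apply depth-wise convs hierarchically, then compute
    self-attention across the channel dimension.
    """
    def __init__(self, dim, num_heads=4, splits=4):
        super().__init__()
        assert dim % splits == 0 and dim % num_heads == 0
        self.splits, self.num_heads = splits, num_heads
        group_dim = dim // splits
        # One depth-wise 3x3 conv per channel group (groups=group_dim means
        # each filter sees a single channel).
        self.dw_convs = nn.ModuleList(
            nn.Conv2d(group_dim, group_dim, 3, padding=1, groups=group_dim)
            for _ in range(splits)
        )
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable per-head scaling, an assumption borrowed from
        # channel-attention designs such as XCiT.
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))

    def forward(self, x):  # x: (B, C, H, W)
        B, C, H, W = x.shape
        # Split into channel groups; each group also receives the previous
        # group's output, so later groups see a larger effective receptive
        # field (multi-scale encoding).
        out, prev = [], 0
        for conv, g in zip(self.dw_convs, torch.chunk(x, self.splits, dim=1)):
            prev = conv(g + prev)
            out.append(prev)
        x = torch.cat(out, dim=1)

        # Transposed attention: the C x C affinity matrix is computed across
        # channels instead of spatial positions, so cost grows linearly in
        # the number of tokens N = H*W.
        t = x.flatten(2).transpose(1, 2)                      # (B, N, C)
        q, k, v = self.qkv(t).chunk(3, dim=-1)
        def heads(z):                                         # (B, h, C/h, N)
            return z.transpose(1, 2).reshape(B, self.num_heads,
                                             C // self.num_heads, -1)
        q, k, v = heads(q), heads(k), heads(v)
        q = nn.functional.normalize(q, dim=-1)
        k = nn.functional.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature   # (B, h, C/h, C/h)
        t = (attn.softmax(dim=-1) @ v)                        # (B, h, C/h, N)
        t = self.proj(t.reshape(B, C, -1).transpose(1, 2))    # (B, N, C)
        return t.transpose(1, 2).reshape(B, C, H, W)
```

For example, `SDTASketch(dim=64)(torch.randn(1, 64, 32, 32))` returns a tensor of the same (1, 64, 32, 32) shape, so the block can be dropped into a stage of a hybrid CNN-Transformer backbone.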