Keywords
Deep learning, Computer science, Benchmark, Artificial neural network, Artificial intelligence, Deep neural network, Task, Image, Feature, Enhanced Data Rates for GSM Evolution (EDGE), Software deployment, Computer engineering, Pattern recognition, Engineering, Linguistics, Philosophy, Geodesy, Systems engineering, Geography, Operating system
Authors
Wong, Alexander; Famouri, Mahmoud; Shafiee, Mohammad Javad
Source
Journal: Cornell University - arXiv
Date: 2020-09-29
Identifier
DOI:10.48550/arxiv.2009.14385
Abstract
While significant advances in deep learning have resulted in state-of-the-art performance across a large number of complex visual perception tasks, the widespread deployment of deep neural networks for TinyML applications involving on-device, low-power image recognition remains a major challenge given the complexity of deep neural networks. In this study, we introduce AttendNets, low-precision, highly compact deep neural networks tailored for on-device image recognition. More specifically, AttendNets possess deep self-attention architectures based on visual attention condensers, which extend the recently introduced stand-alone attention condensers to improve spatial-channel selective attention. Furthermore, AttendNets have unique machine-designed macroarchitecture and microarchitecture designs achieved via a machine-driven design exploration strategy. Experimental results on the ImageNet$_{50}$ benchmark dataset for the task of on-device image recognition showed that AttendNets have significantly lower architectural and computational complexity than several deep neural networks in the research literature designed for efficiency, while achieving the highest accuracy (with the smallest AttendNet achieving $\sim$7.2% higher accuracy, while requiring $\sim$3$\times$ fewer multiply-add operations, $\sim$4.17$\times$ fewer parameters, and $\sim$16.7$\times$ lower weight memory requirements than MobileNet-V1). Based on these promising results, AttendNets illustrate the effectiveness of visual attention condensers as building blocks for enabling various on-device visual perception tasks for TinyML applications.
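The abstract describes visual attention condensers as condensed self-attention blocks that perform spatial-channel selective attention cheaply. The following is a minimal NumPy sketch of that general condense-embed-expand-attend pattern only; the actual AttendNet modules use learned, machine-designed condensation, grouped embedding, and expansion layers whose exact form is given in the paper, so the pooling choice, the identity weight matrix, and the sigmoid gate here are illustrative placeholders, not the authors' design.

```python
import numpy as np

def attention_condenser(V, scale=1.0, pool=2):
    """Hedged sketch of an attention-condenser-style block.

    V: feature map of shape (H, W, C), with H and W divisible by `pool`.
    Illustrates the pattern: condense to a lower-dimensional embedding,
    embed, expand back, then apply multiplicative selective attention.
    """
    H, W, C = V.shape
    # Condense: spatial max-pooling into a reduced embedding space
    Q = V.reshape(H // pool, pool, W // pool, pool, C).max(axis=(1, 3))
    # Embed: cheap channel mixing stands in for a learned embedding layer
    K = np.tanh(Q @ np.eye(C))  # identity weights are a placeholder
    # Expand: nearest-neighbour unpooling back to the input resolution
    A = np.repeat(np.repeat(K, pool, axis=0), pool, axis=1)
    # Selective attention: sigmoid gate applied multiplicatively, scaled
    gate = 1.0 / (1.0 + np.exp(-A))
    return V * gate * scale
```

Because the attention map is computed in a pooled (condensed) space and only expanded at the end, the per-pixel attention cost is amortized, which is the efficiency argument behind using such blocks for TinyML-scale image recognition.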