Computer science
Field-programmable gate array
Latency (audio)
Convolutional neural network
CUDA
Hardware acceleration
Reduction (mathematics)
Acceleration
Efficient energy use
Parallel computing
General-purpose computing on graphics processing units
Embedded system
Computational science
Artificial intelligence
Drawing
Computer graphics (images)
Engineering
Geometry
Telecommunications
Electrical engineering
Physics
Classical mechanics
Mathematics
Authors
Walther Carballo-Hernández, Maxime Pelcat, François Berry
Source
Venue: Cornell University - arXiv
Date: 2021-01-01
Citations: 6
Identifier
DOI:10.48550/arxiv.2102.01343
Abstract
Graphics Processing Units (GPUs) are currently the dominating programmable architecture for Deep Learning (DL) accelerators. The adoption of Field Programmable Gate Arrays (FPGAs) in DL accelerators is, however, gaining momentum. In this paper, we demonstrate that Direct Hardware Mapping (DHM) of a Convolutional Neural Network (CNN) on an embedded FPGA substantially outperforms a GPU implementation in terms of energy efficiency and execution time. However, DHM is highly resource intensive and cannot fully substitute the GPU when implementing a state-of-the-art CNN. We thus propose a hybrid FPGA-GPU DL acceleration method and demonstrate that heterogeneous acceleration outperforms GPU acceleration even when including communication overheads. Experiments are conducted on a heterogeneous multi-platform setup embedding an Nvidia(R) Jetson TX2 CPU-GPU board and an Intel(R) Cyclone10GX FPGA board. The SqueezeNet, MobileNetv2, and ShuffleNetv2 mobile-oriented CNNs are evaluated. We show that heterogeneous FPGA-GPU acceleration outperforms GPU acceleration for the classification inference task over MobileNetv2 (12%-30% energy reduction, 4% to 26% latency reduction), SqueezeNet (21%-28% energy reduction, same latency), and ShuffleNetv2 (25% energy reduction, 21% latency reduction).
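The hybrid approach described in the abstract amounts to partitioning a CNN so that early layers run under DHM on the FPGA and the remainder on the GPU, while accounting for the cost of moving activations between the two devices. The sketch below is not the paper's code; it only illustrates, with hypothetical per-layer latency numbers, how one might pick a split point that minimizes total latency including transfer overhead.

```python
# Illustrative sketch only (not the authors' method or numbers): choosing a
# split point for a hybrid FPGA-GPU CNN inference pipeline.

def best_split(fpga_ms, gpu_ms, transfer_ms):
    """Run layers [0:k) on the FPGA and [k:n) on the GPU.

    fpga_ms[i]     -- latency of layer i on the FPGA (ms)
    gpu_ms[i]      -- latency of layer i on the GPU (ms)
    transfer_ms[k] -- cost of moving the activation at split point k
                      to the GPU (ms); length n + 1

    Returns (k, total_latency) for the split minimizing total latency.
    """
    n = len(fpga_ms)
    best = None
    for k in range(n + 1):
        total = sum(fpga_ms[:k]) + transfer_ms[k] + sum(gpu_ms[k:])
        if best is None or total < best[1]:
            best = (k, total)
    return best

# Hypothetical per-layer latencies (ms) for a small 4-layer CNN:
fpga = [0.2, 0.3, 1.5, 2.0]       # early conv layers map well to DHM
gpu  = [0.8, 0.9, 0.6, 0.5]       # later, wider layers favor the GPU
xfer = [0.1, 0.2, 0.3, 0.2, 0.0]  # transfer cost at each split point

k, latency = best_split(fpga, gpu, xfer)
print(f"split after layer {k}: {latency:.2f} ms")
```

In the paper's setting the search space also includes energy, resource usage, and the PCIe/Ethernet link cost between the Jetson TX2 and the Cyclone10GX, but the same split-point reasoning applies.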