Computer science
Recurrent neural network
Field-programmable gate array
Hardware acceleration
Computation
Parallel computing
CUDA
General-purpose computing on graphics processing units
Throughput
Artificial neural network
Artificial intelligence
Embedded system
Algorithm
Drawing
Computer graphics (images)
Telecommunications
Wireless
Authors
Hyungmin Cho,Jeesoo Lee,Jaejin Lee
Identifier
DOI:10.1109/tpds.2021.3124125
Abstract
GPU-based platforms provide high computation throughput for large mini-batch deep neural network computations. However, a large batch size may not be ideal for some situations, such as aiming at low latency, training on edge/mobile devices, partial retraining for personalization, and having irregular input sequence lengths. GPU performance suffers from low utilization especially for small-batch recurrent neural network (RNN) applications where sequential computations are required. In this article, we propose a hybrid architecture, called FARNN, which combines a GPU and an FPGA to accelerate RNN computation for small batch sizes. After separating RNN computations into GPU-efficient and GPU-inefficient tasks, we design special FPGA computation units that accelerate the GPU-inefficient RNN tasks. FARNN off-loads the GPU-inefficient tasks to the FPGA. We evaluate FARNN with synthetic RNN layers of various configurations on the Xilinx UltraScale+ FPGA and the NVIDIA P100 GPU in addition to evaluating it with real RNN applications. The evaluation result indicates that FARNN outperforms the P100 GPU platform for RNN training by up to 4.2× with small batch sizes, long input sequences, and many RNN cells per layer.
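The GPU-utilization problem the abstract describes stems from the recurrence itself: each time step of an RNN depends on the hidden state of the previous step, so the time dimension cannot be parallelized, and with a small batch each per-step matrix multiply is too small to fill a GPU. The minimal NumPy sketch below illustrates that sequential dependency; it is an illustration only, not the paper's FARNN design, and all names in it are hypothetical.

```python
import numpy as np

def rnn_forward(x, W_xh, W_hh, b):
    """Vanilla RNN forward pass.

    x: (seq_len, batch, input_dim) -> hidden states (seq_len, batch, hidden_dim).
    The loop below is inherently sequential: h at step t needs h at step t-1,
    so only the (small) batch/hidden dimensions offer parallelism per step.
    """
    seq_len, batch, _ = x.shape
    hidden_dim = W_hh.shape[0]
    h = np.zeros((batch, hidden_dim))
    outputs = []
    for t in range(seq_len):  # cannot be parallelized across t
        h = np.tanh(x[t] @ W_xh + h @ W_hh + b)
        outputs.append(h)
    return np.stack(outputs)

rng = np.random.default_rng(0)
seq_len, batch, input_dim, hidden_dim = 16, 2, 8, 8  # small batch, long sequence
x = rng.standard_normal((seq_len, batch, input_dim))
W_xh = rng.standard_normal((input_dim, hidden_dim)) * 0.1
W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
b = np.zeros(hidden_dim)

hs = rnn_forward(x, W_xh, W_hh, b)
print(hs.shape)  # (16, 2, 8)
```

With batch = 2, each step's matrix multiplies are 2×8 by 8×8, far too small to occupy a P100's compute units, which is the regime where the paper's FPGA offload is claimed to pay off.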