Computer science
Field-programmable gate array
Bottleneck
Embedded system
Convolutional neural network
Memory hierarchy
Efficient energy use
Virtex
Design flow
Hardware acceleration
Uniform memory access
Memory bandwidth
Computer architecture
Bandwidth (computing)
Computer hardware
Memory management
Parallel computing
Semiconductor memory
Artificial intelligence
Computer network
Cache (computing)
Engineering
Electrical engineering
Authors
Maurice Peemen,Arnaud Arindra Adiyoso Setio,Bart Mesman,Henk Corporaal
Identifier
DOI: 10.1109/iccd.2013.6657019
Abstract
In the near future, cameras will be used everywhere as flexible sensors for numerous applications. For mobility and privacy reasons, the required image processing should be local on embedded computer platforms with performance requirements and energy constraints. Dedicated acceleration of Convolutional Neural Networks (CNN) can achieve these targets with enough flexibility to perform multiple vision tasks. A challenging problem for the design of efficient accelerators is the limited amount of external memory bandwidth. We show that the effects of the memory bottleneck can be reduced by a flexible memory hierarchy that supports the complex data access patterns in CNN workloads. The efficiency of the on-chip memories is maximized by our scheduler, which uses tiling to optimize for data locality. Our design flow ensures that on-chip memory size is minimized, which reduces area and energy usage. The design flow is evaluated by a High Level Synthesis implementation on a Virtex 6 FPGA board. Compared to accelerators with standard scratchpad memories, the FPGA resources can be reduced by up to 13x while maintaining the same performance. Alternatively, when the same amount of FPGA resources is used, our accelerators are up to 11x faster.
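The loop tiling that the abstract refers to can be illustrated with a minimal sketch. The C function below shows output-tile tiling of a single-feature-map convolution so that the working set of the input and output fits in small on-chip buffers; it is only an illustration of the general technique, not the authors' scheduler or HLS design, and the tile sizes TILE_Y/TILE_X, the function name, and all parameters are assumptions chosen for clarity.

```c
#include <stddef.h>

#define TILE_Y 8   /* hypothetical tile sizes, chosen to fit on-chip buffers */
#define TILE_X 8

/* conv2d_tiled: single input/output feature map, k x k kernel, valid padding.
 * Tiling the output rows and columns bounds the portion of `in` and `out`
 * touched per tile, so it can be kept in on-chip (scratchpad/BRAM) memory
 * instead of being re-fetched from external DRAM. */
void conv2d_tiled(const float *in, const float *kernel, float *out,
                  size_t in_h, size_t in_w, size_t k)
{
    size_t out_h = in_h - k + 1;
    size_t out_w = in_w - k + 1;

    for (size_t ty = 0; ty < out_h; ty += TILE_Y) {       /* tile over output rows */
        for (size_t tx = 0; tx < out_w; tx += TILE_X) {   /* tile over output cols */
            size_t y_end = (ty + TILE_Y < out_h) ? ty + TILE_Y : out_h;
            size_t x_end = (tx + TILE_X < out_w) ? tx + TILE_X : out_w;
            for (size_t y = ty; y < y_end; ++y) {
                for (size_t x = tx; x < x_end; ++x) {
                    float acc = 0.0f;
                    for (size_t ky = 0; ky < k; ++ky)
                        for (size_t kx = 0; kx < k; ++kx)
                            acc += in[(y + ky) * in_w + (x + kx)]
                                 * kernel[ky * k + kx];
                    out[y * out_w + x] = acc;
                }
            }
        }
    }
}
```

Shrinking or growing the tile sizes trades input-pixel reuse against on-chip buffer capacity, which is the kind of locality-versus-area trade-off a design flow like the one described would explore.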