Keywords
Operand, multiplier, multiplication, convolutional neural network, computer science, computation, adder, arithmetic, failure, algorithm, artificial neural network, truncation, mathematics, parallel computing, artificial intelligence, machine learning, combinatorics, telecommunications, latency, macroeconomics, economy
Authors
Min Soo Kim, Alberto A. Del Barrio, Leonardo Tavares Oliveira, R. Hermida, Nader Bagherzadeh
Identifier
DOI: 10.1109/tc.2018.2880742
Abstract
This paper proposes energy-efficient approximate multipliers based on Mitchell's log multiplication, optimized for performing inference on convolutional neural networks (CNNs). Various design techniques are applied to the log multiplier, including a fully parallel leading-one detector (LOD), efficient shift-amount calculation, and exact zero computation. Additionally, truncation of the operands is studied to create a customizable log multiplier that further reduces energy consumption. The paper also proposes using the one's complement to handle negative numbers, as an approximation of the two's complement used in prior works. The viability of the proposed designs is supported by detailed formal analysis as well as experimental results on CNNs. The experiments also provide insights into the effect of approximate multiplication in CNNs, identifying the importance of minimizing the range of error. The proposed customizable design at $w = 8$ saves up to 88 percent energy compared to the exact fixed-point multiplier at 32 bits, with a performance degradation of just 0.2 percent on the ImageNet ILSVRC2012 dataset.
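For readers unfamiliar with the baseline being optimized, the following is a minimal Python sketch of Mitchell's log multiplication on unsigned integers. It is an illustration of the classical algorithm only, not the paper's hardware design: the function name mitchell_mul is hypothetical, the exact-zero check mirrors the zero handling the abstract mentions, and the paper's operand truncation and one's-complement negative handling are not shown.

def mitchell_mul(a: int, b: int) -> int:
    """Approximate a * b using Mitchell's logarithmic multiplication.

    Each operand n is approximated as log2(n) ~= k + x, where k is the
    position of the leading one and x is the mantissa below it. The two
    approximate logs are added, then an approximate antilog is taken.
    """
    if a == 0 or b == 0:
        return 0  # exact zero computation, as the abstract calls for

    k1, k2 = a.bit_length() - 1, b.bit_length() - 1  # leading-one positions
    m1 = a - (1 << k1)  # mantissa bits of a (x1 = m1 / 2**k1)
    m2 = b - (1 << k2)  # mantissa bits of b (x2 = m2 / 2**k2)

    # (x1 + x2) scaled by 2**(k1 + k2), kept in integer arithmetic
    s = (m1 << k2) + (m2 << k1)
    one = 1 << (k1 + k2)  # the value 1.0 at the same fixed-point scale

    if s < one:
        # x1 + x2 < 1:  product ~= 2**(k1 + k2) * (1 + x1 + x2)
        return one + s
    # x1 + x2 >= 1:  product ~= 2**(k1 + k2 + 1) * (x1 + x2)
    return s << 1

if __name__ == "__main__":
    for a, b in [(3, 3), (5, 7), (100, 200)]:
        print(f"{a} * {b}: approx {mitchell_mul(a, b)}, exact {a * b}")

Running this shows the characteristic behavior the paper exploits: the result is always an underestimate (e.g., 3 * 3 yields 8, 100 * 200 yields 18432), with a relative error bounded by roughly -11.1 percent, which is why the range of error matters for CNN inference accuracy.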