Multiplier (economics)
Adder
Computation
Logarithm
Fraction (chemistry)
Computer science
Arithmetic
Artificial neural network
Deep learning
Mathematics
Algorithm
Mathematical optimization
Artificial intelligence
Latency (audio)
Mathematical analysis
Macroeconomics
Telecommunications
Economics
Organic chemistry
Chemistry
Identifier
DOI:10.1109/jetcas.2022.3231642
Abstract
The posit numeric format has received increasing attention in recent years. Its tapered precision makes it especially suitable for many applications, including deep-learning computation. However, because of its dynamically sized fields, implementing posit arithmetic in hardware is more expensive than implementing its floating-point counterpart. To address this cost, this paper proposes posit multipliers with approximate-computing features. The core idea of the proposed design is to truncate the fraction multiplier according to the estimated fraction bit-width of the product, so that the resource consumption of the fraction multiplier, and consequently of the fraction adder, can be significantly reduced. The proposed method is applied to both linear-domain and logarithm-domain posit multipliers. The 8-, 16-, and 32-bit versions of the proposed approximate posit multipliers are implemented and analyzed. For the 16-bit posit format commonly used in deep-learning computation, the proposed approximate posit multiplier consumes 16% less power than the conventional posit multiplier design, and the proposed 16-bit approximate logarithm multiplier achieves a 15% improvement in power consumption over the state-of-the-art approximate posit logarithm multiplier. When the proposed 16-bit approximate posit multipliers are applied to the computation of several deep neural network models, significant improvements in energy efficiency are achieved with negligible accuracy degradation.
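To make the core idea concrete, below is a minimal Python sketch of how such a truncation might work for a standard posit(n, es) encoding: the operand scales are used to estimate how many fraction bits the product can keep after re-encoding, and the significand multiplication is truncated to roughly that width. The decoding routine, the width-estimation rule, and the demo values are illustrative assumptions for posit(16, 1), not the paper's hardware design.

```python
# Illustrative sketch only (standard posit(n, es) encoding assumed).
# The truncation rule is one plausible reading of the abstract, not the
# paper's exact implementation.

def decode_posit(bits, n=16, es=1):
    """Decode an n-bit posit(n, es) word into (sign, scale, frac, frac_width).

    A regular posit's value is (-1)**sign * 2**scale * (1 + frac / 2**frac_width).
    Returns None for the two special patterns (zero and NaR).
    """
    mask = (1 << (n - 1)) - 1
    if bits == 0 or bits == 1 << (n - 1):
        return None
    sign = (bits >> (n - 1)) & 1
    body = bits & mask
    if sign:                                  # negative posits are stored in two's complement
        body = (-body) & mask
    lead = (body >> (n - 2)) & 1              # regime: run of identical bits from the MSB down
    run, i = 0, n - 2
    while i >= 0 and ((body >> i) & 1) == lead:
        run, i = run + 1, i - 1
    k = run - 1 if lead else -run
    rest_width = max(n - 1 - run - 1, 0)      # bits left after the regime terminator
    rest = body & ((1 << rest_width) - 1)
    e_width = min(es, rest_width)
    exp = rest >> (rest_width - e_width) if e_width else 0
    frac_width = rest_width - e_width         # the dynamic fraction bit-width
    frac = rest & ((1 << frac_width) - 1)
    return sign, k * (1 << es) + exp, frac, frac_width


def estimated_product_frac_width(scale_a, scale_b, n=16, es=1):
    """Estimate how many fraction bits the product keeps after re-encoding.

    Only the operand scales are used (the possible carry out of the significand
    product is ignored), which is what makes the result an estimate.
    """
    k = (scale_a + scale_b) >> es             # regime value of the product (floor division)
    regime_bits = (k + 2) if k >= 0 else (1 - k)
    return max(n - 1 - regime_bits - es, 0)


def truncated_significand_product(frac_a, wa, frac_b, wb, keep):
    """Multiply the significands 1.frac_a and 1.frac_b, truncating each operand
    to at most `keep` fraction bits so a narrower multiplier suffices."""
    sig_a = ((1 << wa) + frac_a) >> max(wa - keep, 0)
    sig_b = ((1 << wb) + frac_b) >> max(wb - keep, 0)
    prod_frac_width = min(wa, keep) + min(wb, keep)
    return sig_a * sig_b, prod_frac_width


if __name__ == "__main__":
    sa, scale_a, fa, wa = decode_posit(0x7201)   # about 5.002 in posit(16, 1)
    sb, scale_b, fb, wb = decode_posit(0x7100)   # 4.5 in posit(16, 1)
    keep = estimated_product_frac_width(scale_a, scale_b)
    prod, wp = truncated_significand_product(fa, wa, fb, wb, keep)
    exact = (1 + fa / 2**wa) * (1 + fb / 2**wb)
    print("kept fraction bits:", keep)           # 10 instead of the full 11
    print("approx significand product:", prod / 2**wp, "exact:", exact)
```

The shortened operands are the software analogue of shrinking the fraction multiplier (and the downstream fraction adder) described in the abstract: bits beyond the estimated output width would be discarded by posit re-encoding anyway, so computing them only costs area and power.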