Topics
Computer science; Adder; Overhead (engineering); Pruning; Static random-access memory; MNIST database; Reduction (mathematics); Digital electronics; Computer engineering; Electronic circuit; Tree (set theory); Algorithm; Parallel computing; Artificial neural network; Computer hardware; Mathematics; Artificial intelligence; Engineering; Latency (audio); Mathematical analysis; Geometry; Biology; Electrical engineering; Operating system; Telecommunications; Agronomy
Authors
Chaojie He, Zi Wang, Feibin Xiang, Zhuoyu Dai, Yifan He, Jinshan Yue, Yongpan Liu
Source
Journal: IEEE Transactions on Circuits and Systems II: Express Briefs
[Institute of Electrical and Electronics Engineers]
Date: 2023-08-14
Volume/Issue: 71(2): 852-856
Citations: 16
Identifier
DOI: 10.1109/tcsii.2023.3304752
Abstract
Energy-efficient computing-in-memory (CIM) architectures have drawn much attention with the increasing demands of neural networks. Several SRAM-based CIM architectures adopt a digital implementation, using digital adder trees (ATs) to perform in-memory multiply-accumulate (MAC) operations. Compared with analog-domain CIM, digital CIM eliminates the errors caused by analog circuits and thus achieves high accuracy. However, the digital AT still incurs considerable power/area overhead. This brief proposes a novel low-power AT solution based on sparsity and approximate-circuit co-design. Several sparsity modes are explored to perform approximate logic substitution of the full adder. In addition, a fine-grained pruning algorithm and offline data rearrangement compensate for the accuracy loss incurred by the approximation. The proposed approximation scheme achieves at least a 19.3% reduction in area and a 30.0% reduction in power consumption. The maximum inference accuracy of the LeNet model on the MNIST dataset is only 0.06% lower than the baseline accuracy. On the retrained VGG8 and VGG16 models on the CIFAR-10 dataset, the three proposed approximation strategies incur at most a 0.99% accuracy decrease.
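The abstract names the building blocks (adder trees built from full adders, with approximate logic substituted into some cells) but not the exact circuits, sparsity modes, or pruning scheme. As a rough illustration only, the Python sketch below models a bit-level adder tree in which a hypothetical OR/AND approximate full adder replaces the exact cell in the least-significant bit positions; the names `approx_full_adder`, `rca_add`, and `adder_tree_mac` are invented for this sketch, and the substituted cell is a generic textbook-style approximation, not the scheme evaluated in the brief.

```python
# Illustrative sketch (not the paper's circuits): an adder-tree MAC whose
# ripple-carry adders use an approximate full adder, obtained by logic
# substitution, in the low-order bit positions.

def exact_full_adder(a: int, b: int, cin: int):
    """Standard full adder: sum = a ^ b ^ cin, cout = majority(a, b, cin)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def approx_full_adder(a: int, b: int, cin: int):
    """Hypothetical substituted cell: OR/AND gates replace the XOR chain.
    Cheaper in gates, but wrong for some input patterns (e.g., a = b = 1)."""
    s = a | b       # the sum output ignores cin
    cout = a & b    # carry propagation from cin is dropped
    return s, cout

def rca_add(x: int, y: int, width: int, approx_lsbs: int) -> int:
    """Ripple-carry add of two unsigned `width`-bit values, using the
    approximate cell for the `approx_lsbs` least-significant positions."""
    carry, result = 0, 0
    for i in range(width):
        a, b = (x >> i) & 1, (y >> i) & 1
        fa = approx_full_adder if i < approx_lsbs else exact_full_adder
        s, carry = fa(a, b, carry)
        result |= s << i
    return result

def adder_tree_mac(products: list[int], width: int, approx_lsbs: int) -> int:
    """Pairwise-reduce partial products, level by level, as a digital CIM
    adder tree would accumulate them into one MAC result."""
    level = products[:]
    while len(level) > 1:
        nxt = [rca_add(level[i], level[i + 1], width, approx_lsbs)
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd element rides through to next level
            nxt.append(level[-1])
        level = nxt
    return level[0]

if __name__ == "__main__":
    prods = [5, 12, 3, 9, 7, 1, 4, 2]  # toy partial products
    print(adder_tree_mac(prods, width=8, approx_lsbs=0))  # exact: 43
    print(adder_tree_mac(prods, width=8, approx_lsbs=2))  # approximate
```

Confining the substituted cells to the low-order positions keeps the arithmetic error to a few LSBs per addition, which is the usual rationale for why approximate ATs cost little network accuracy; the brief's reported 0.06%-0.99% accuracy drops are consistent with that pattern, though its compensation relies on the pruning and data-rearrangement steps not modeled here.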