Resistive random-access memory
Computer science
Quantization (signal processing)
Crossbar switch
Artificial neural network
Overhead (engineering)
Successive-approximation ADC
Computer hardware
Electronic engineering
Artificial intelligence
Algorithm
Capacitor
Electrical engineering
Engineering
Voltage
Telecommunications
Operating system
Authors
Azat Azamat, Faaiz Asim, Jintae Kim, Jongeun Lee
Identifiers
DOI:10.1109/tcad.2023.3294461
Abstract
While ReRAM (Resistive Random-Access Memory) crossbar arrays have the potential to significantly accelerate DNN (Deep Neural Network) training through fast and low-cost matrix-vector multiplication, peripheral circuits like ADCs (analog-to-digital converters) create a high overhead. These ADCs consume over half of the chip power and a considerable portion of the chip cost. To address this challenge, we propose advanced quantization techniques that can significantly reduce the ADC overhead of ReRAM crossbar arrays. Our methodology interprets the ADC as a quantization mechanism, allowing us to scale the range of the ADC input optimally along with the weight parameters of a DNN, resulting in a multiple-bit reduction in ADC precision. This approach reduces ADC size and power consumption by several times, and it is applicable to any DNN type (binarized or multi-bit) and any ReRAM crossbar array size. Additionally, we propose ways to minimize the overhead of the digital scaler, which is an essential part of our scheme and sometimes required. Our experimental results using ResNet-18 on the ImageNet dataset demonstrate that our method can reduce the size of the ADC by 32 times compared to ISAAC with only a minimal accuracy degradation of 0.24. We also present evaluation results in the presence of ReRAM non-ideality (such as stuck-at faults).
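The abstract's core idea of treating the ADC as a quantizer whose input range is scaled jointly with the weights can be illustrated with a minimal sketch. This is not the paper's actual method; the array size, the activation and conductance distributions, and the calibration margin below are all invented for illustration. It models an ADC as a uniform quantizer and shows that matching its full-scale range to the observed bitline-current distribution, rather than the worst-case sum, lowers quantization error at a fixed bit width:

```python
import numpy as np

rng = np.random.default_rng(0)

def adc(x, full_scale, bits):
    # Uniform quantizer modeling an ADC: clip to [0, full_scale],
    # round to one of 2**bits levels, and map back to an analog value.
    levels = 2**bits - 1
    step = full_scale / levels
    return np.clip(np.round(x / step), 0, levels) * step

# Bitline currents: dot products of activations and cell conductances
# (hypothetical distributions, chosen only for this sketch).
acts = rng.random((1000, 128))        # word-line activations in [0, 1)
g = rng.random(128) * 0.1             # column conductances (weights)
currents = acts @ g

worst_case = 128 * 0.1                # range a naive ADC must cover
calibrated = currents.max() * 1.05    # range scaled to the observed signal

err_naive_5b = np.abs(adc(currents, worst_case, 5) - currents).mean()
err_scaled_5b = np.abs(adc(currents, calibrated, 5) - currents).mean()
```

Because real dot products concentrate far below the worst-case sum, the scaled 5-bit quantizer has a much smaller step size and hence lower error than the naive one at the same precision; in a scheme like the one described, the scale factor would be folded into the weights or compensated by the digital scaler rather than applied per sample.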