Resistive random-access memory
Neuromorphic engineering
Computer science
MNIST database
Perceptron
Artificial neural network
Quantization (signal processing)
Robustness (evolution)
Granularity
Computer engineering
Artificial intelligence
Convolutional neural network
Machine learning
Electronic engineering
Algorithm
Voltage
Engineering
Electrical engineering
Biochemistry
Chemistry
Gene
Operating system
Authors
Qing Yang, Hai Li, Qing Wu
Identifier
DOI: 10.1109/iscas.2018.8351327
Abstract
Deep neural networks (DNNs) are widely applied in the field of artificial intelligence. While the performance of DNNs continues to improve with more complex and deeper structures, their deployment on embedded systems remains a critical problem. Neuromorphic system designs based on resistive random-access memory (ReRAM) provide an opportunity for power-efficient DNN deployment, but they face the challenge of limited programming resolution. A quantized training method is proposed in this paper to enhance the performance of ReRAM-based neuromorphic systems. Unlike previous methods, which add a dedicated regularization term to the loss function to constrain the parameter distribution, our quantized training method handles training and quantization simultaneously to alleviate the impact of limited parameter precision. Models with discrete parameters obtained after training can be directly mapped onto ReRAM devices. We conduct experiments on image recognition tasks using a multi-layer perceptron (MLP) and a convolutional neural network (CNN). The results verify that the quantized training method can approximate the accuracy of full-precision training; for example, a two-layer MLP based on binary ReRAM decreases classification accuracy by only 0.25% on the MNIST dataset. In addition, we carefully investigate and present the importance of layer size under ReRAM's low programming resolution, the different parameter-resolution demands of convolution layers and fully connected layers, and the system's robustness to ReRAM variations after quantized training. The code is available at https://github.com/qingyangqing/quantized-rram-net.
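The authors' exact procedure is given in the paper and the linked repository; the sketch below only illustrates the general idea of training with quantization in the loop, i.e. snapping weights to a few discrete levels in the forward pass while gradient updates flow to full-precision copies through a straight-through estimator. It uses PyTorch, and the class name QuantizedLinear, the quantizer design, the number of levels, and the layer sizes are all assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QuantizedLinear(nn.Module):
    # Hypothetical linear layer whose weights are snapped to a small number of
    # discrete levels in the forward pass (mimicking limited ReRAM programming
    # resolution), while full-precision shadow weights receive the gradient
    # updates through a straight-through estimator.
    def __init__(self, in_features, out_features, levels=2):
        super().__init__()
        self.weight = nn.Parameter(0.05 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.levels = levels  # levels=2 corresponds to binary ReRAM cells

    def quantize(self, w):
        # Map weights to `levels` evenly spaced values in [-scale, +scale].
        scale = w.abs().max().clamp(min=1e-8)
        step = 2 * scale / max(self.levels - 1, 1)
        q = torch.round((w + scale) / step) * step - scale
        # Straight-through estimator: the forward pass uses the quantized
        # weights, the backward pass treats the quantizer as the identity.
        return w + (q - w).detach()

    def forward(self, x):
        return F.linear(x, self.quantize(self.weight), self.bias)

# Assumed two-layer MLP with binary weights, loosely mirroring the MNIST
# experiment described in the abstract (the hidden size is an assumption).
model = nn.Sequential(
    nn.Flatten(),
    QuantizedLinear(28 * 28, 256, levels=2),
    nn.ReLU(),
    QuantizedLinear(256, 10, levels=2),
)
x = torch.randn(8, 1, 28, 28)                                  # dummy image batch
loss = F.cross_entropy(model(x), torch.randint(0, 10, (8,)))
loss.backward()                                                # gradients reach the full-precision weights

After training, the discrete weight values produced by quantize() can in principle be mapped directly onto ReRAM conductance levels, which is the deployment path the abstract describes.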