Memristor
Neuromorphic engineering
Computer science
Memtransistor
Convolutional neural network
Scalability
Crossbar switch
Artificial neural network
Von Neumann architecture
Computer architecture
Computer hardware
Resistive random-access memory
Computer engineering
Parallel computing
Artificial intelligence
Deep learning
Electronic engineering
Electrical engineering
Engineering
Operating system
Database
Telecommunications
Voltage
Authors
Peng Yao, Huaqiang Wu, Bin Gao, Jianshi Tang, Qingtian Zhang, Wenqiang Zhang, J. Joshua Yang, He Qian
Source
Journal: Nature
[Nature Portfolio]
Date: 2020-01-29
Volume/Issue: 577 (7792): 641-646
Citations: 1695
Identifier
DOI: 10.1038/s41586-020-1942-4
Abstract
Memristor-enabled neuromorphic computing systems provide a fast and energy-efficient approach to training neural networks1–4. However, convolutional neural networks (CNNs)—one of the most important models for image recognition5—have not yet been fully hardware-implemented using memristor crossbars, which are cross-point arrays with a memristor device at each intersection. Moreover, achieving software-comparable results is highly challenging owing to the poor yield, large variation and other non-ideal characteristics of devices6–9. Here we report the fabrication of high-yield, high-performance and uniform memristor crossbar arrays for the implementation of CNNs, which integrate eight 2,048-cell memristor arrays to improve parallel-computing efficiency. In addition, we propose an effective hybrid-training method to adapt to device imperfections and improve the overall system performance. We built a five-layer memristor-based CNN to perform MNIST10 image recognition, and achieved a high accuracy of more than 96 per cent. In addition to parallel convolutions using different kernels with shared inputs, replication of multiple identical kernels in memristor arrays was demonstrated for processing different inputs in parallel. The memristor-based CNN neuromorphic system has an energy efficiency more than two orders of magnitude greater than that of state-of-the-art graphics-processing units, and is shown to be scalable to larger networks, such as residual neural networks. Our results are expected to enable a viable memristor-based non-von Neumann hardware solution for deep neural networks and edge computing. A fully hardware-based memristor convolutional neural network using a hybrid training method achieves an energy efficiency more than two orders of magnitude greater than that of graphics-processing units.
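The abstract itself contains no code; as an illustration of the core idea it describes, the following minimal NumPy sketch (not the authors' implementation) shows how a memristor crossbar can evaluate a convolutional layer: signed kernel weights are mapped onto differential conductance pairs, each input patch is applied as a voltage vector, and the resulting column currents yield the outputs of all kernels sharing that input in one parallel read. The function names, array sizes, conductance range, and the Gaussian write-noise model are assumptions for illustration only.

```python
# Minimal sketch of convolution on a memristor crossbar (illustrative only):
# kernels are stored as conductance pairs (G+ - G-) in a cross-point array,
# input pixels are applied as voltages, and the column currents give the
# convolution outputs in parallel. Sizes and noise model are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def program_crossbar(weights, g_max=1.0, write_noise=0.05):
    """Map signed weights onto a differential conductance pair (G+, G-).

    write_noise models device programming variation; the Gaussian model and
    its magnitude are assumptions, not figures from the paper."""
    w = weights / np.max(np.abs(weights))            # normalise into conductance range
    g_pos = np.clip(w, 0.0, None) * g_max            # positive part -> G+
    g_neg = np.clip(-w, 0.0, None) * g_max           # negative part -> G-
    g_pos = g_pos + write_noise * g_max * rng.standard_normal(g_pos.shape)
    g_neg = g_neg + write_noise * g_max * rng.standard_normal(g_neg.shape)
    return np.clip(g_pos, 0.0, g_max), np.clip(g_neg, 0.0, g_max)

def crossbar_matvec(g_pos, g_neg, v_in):
    """Ohm's law + Kirchhoff's current law: the column currents equal the
    matrix-vector product of (G+ - G-) with the input voltage vector."""
    return (g_pos - g_neg) @ v_in

def conv2d_via_crossbar(image, kernels, write_noise=0.05):
    """Unroll each receptive field into a voltage vector (im2col-style) so
    that all kernels sharing that input are evaluated in one crossbar read."""
    n_kernels, k, _ = kernels.shape
    h = image.shape[0] - k + 1
    w = image.shape[1] - k + 1
    flat_kernels = kernels.reshape(n_kernels, -1)    # one crossbar row per kernel
    g_pos, g_neg = program_crossbar(flat_kernels, write_noise=write_noise)
    out = np.empty((n_kernels, h, w))
    for i in range(h):
        for j in range(w):
            patch = image[i:i + k, j:j + k].reshape(-1)   # input voltages
            out[:, i, j] = crossbar_matvec(g_pos, g_neg, patch)
    return out

if __name__ == "__main__":
    image = rng.random((8, 8))
    kernels = rng.standard_normal((4, 3, 3))         # 4 kernels with shared input
    ideal = conv2d_via_crossbar(image, kernels, write_noise=0.0)
    noisy = conv2d_via_crossbar(image, kernels, write_noise=0.05)
    print("relative error from device variation:",
          np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal))
```

The same picture connects to the hybrid-training idea mentioned in the abstract: the convolution kernels stay fixed in the imperfect crossbars, while a later layer could be retrained in software to absorb the residual error from device variation; that retraining step is not shown in this sketch.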