Backpropagation
Computer science
Artificial neural network
Scalability
Neuromorphic engineering
Gradient descent
Stochastic gradient descent
Artificial intelligence
Computer hardware
Database
Authors
Eveline R. W. van Doremaele,Tim Stevens,Stijn Ringeling,Simone Spolaor,Marco Fattori,Yoeri van de Burgt
Source
Journal: Science Advances
[American Association for the Advancement of Science]
Date: 2024-07-12
Volume (Issue): 10 (28)
Cited by: 2
Identifier
DOI: 10.1126/sciadv.ado8999
Abstract
Neural network training can be slow and energy-expensive due to the frequent transfer of weight data between digital memory and processing units. Neuromorphic systems can accelerate neural networks by performing multiply-accumulate operations in parallel using nonvolatile analog memory. However, executing the widely used backpropagation training algorithm in multilayer neural networks requires information—and therefore storage—of the partial derivatives of the weight values, preventing suitable and scalable implementation in hardware. Here, we propose a hardware implementation of the backpropagation algorithm that progressively updates each layer using in situ stochastic gradient descent, avoiding this storage requirement. We experimentally demonstrate the in situ error calculation and the proposed progressive backpropagation method in a multilayer hardware-implemented neural network. We confirm identical learning characteristics and classification performance compared to conventional backpropagation in software. We show that our approach can be scaled to large and deep neural networks, enabling highly efficient training of advanced artificial intelligence computing systems.
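As a rough illustration of the layer-by-layer update idea described in the abstract, the sketch below trains a small two-layer network in NumPy and applies each weight update as soon as the corresponding error signal becomes available, so no table of partial derivatives is ever stored. The network sizes, sigmoid activation, learning rate, and function names are illustrative assumptions for a software sketch only; they are not the paper's analog-hardware implementation, in which the multiply-accumulate operations are performed in parallel in nonvolatile analog memory.

```python
# Minimal software sketch (assumptions, not the paper's method in detail):
# error signals are propagated backward and each layer's weights are
# updated in place immediately, instead of first storing all gradients.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer network: input(4) -> hidden(8) -> output(3); sizes are assumptions.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 3))

def progressive_backprop_step(x, target, lr=0.1):
    """One in situ SGD step: weights are overwritten as soon as their local
    error signal is known, so no separate gradient storage is required."""
    global W1, W2
    # Forward pass: multiply-accumulate operations (done in analog memory in hardware).
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)

    # Output-layer error signal.
    delta2 = (y - target) * y * (1.0 - y)
    # Propagate the error one layer back before W2 is overwritten, so the
    # resulting updates match conventional backpropagation.
    delta1 = (delta2 @ W2.T) * h * (1.0 - h)

    # In situ updates: each layer is adjusted directly from its error signal;
    # the outer-product gradient matrices are never kept in memory.
    W2 -= lr * np.outer(h, delta2)
    W1 -= lr * np.outer(x, delta1)
    return 0.5 * np.sum((y - target) ** 2)

# Toy usage (illustrative only): drive one random input toward a one-hot target.
x = rng.normal(size=4)
t = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    loss = progressive_backprop_step(x, t)
print(f"final loss: {loss:.4f}")
```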