Computer science
Spiking neural network
Memristor
Reinforcement learning
Initialization
Neuromorphic engineering
Artificial neural network
Artificial intelligence
Benchmark (surveying)
Spike (software development)
Machine learning
Software engineering
Geodesy
Geography
Electrical engineering
Programming language
Engineering
Authors
Danila Vlasov, A. A. Minnekhanov, Roman Rybka, Yury A. Davydov, Alexander Sboev, Alexey Serenko, Aleksandr I. Iliasov, V. A. Demin
Identifier
DOI:10.1016/j.neunet.2023.07.031
Abstract
Neural networks implemented in memristor-based hardware can provide fast and efficient in-memory computation, but traditional learning methods such as error back-propagation are hardly feasible in such hardware. Spiking neural networks (SNNs) are highly promising in this regard, as their weights can be changed locally in a self-organized manner, without the high-precision weight updates that require information from almost the entire network. This problem is especially relevant for solving control tasks with neural-network reinforcement learning methods, as those are highly sensitive to any source of stochasticity in model initialization, training, or the decision-making procedure. This paper presents an online reinforcement learning algorithm in which connection weights are updated after processing each environment state during interaction-with-environment data generation. Another novel feature of the algorithm is that it is applied to SNNs with memristor-based STDP-like learning rules. The plasticity functions are obtained from real memristors based on poly-p-xylylene and CoFeB-LiNbO3 nanocomposite, which were experimentally assembled and analyzed. The SNN is composed of leaky integrate-and-fire neurons. Environmental states are encoded by the timings of input spikes, and the control action is decoded from the first output spike. The proposed learning algorithm solves the Cart-Pole benchmark task successfully. This result could be the first step towards implementing a real-time agent learning procedure in a continuous-time environment that can run on neuromorphic systems with memristive synapses.
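The abstract names three ingredients: latency (spike-timing) encoding of environment states, leaky integrate-and-fire (LIF) neurons whose first output spike decodes the action, and an STDP-like local weight update. The sketch below illustrates these mechanisms in isolation; it is not the paper's algorithm. All function names, parameter values, and the exponential STDP curve are illustrative assumptions (in the paper, the plasticity functions are measured from real memristive devices).

```python
import numpy as np

def encode_latency(state, t_max=20.0):
    """Latency coding (illustrative): each normalized state component
    becomes one input spike time; larger values spike earlier."""
    state = np.clip(state, 0.0, 1.0)
    return t_max * (1.0 - state)

def lif_first_spike(spike_times, weights, tau=10.0, v_th=1.0,
                    dt=0.1, t_end=50.0):
    """Single LIF neuron driven by weighted input spikes.
    Returns the time of its first output spike, or None if it never fires
    (the first spike among output neurons would decode the action)."""
    bins = np.floor(spike_times / dt).astype(int)  # time step of each input spike
    v = 0.0
    decay = np.exp(-dt / tau)
    for step in range(int(t_end / dt)):
        v *= decay                          # membrane leak
        v += np.sum(weights[bins == step])  # integrate spikes arriving this step
        if v >= v_th:
            return (step + 1) * dt          # threshold crossed: first spike
    return None

def stdp_update(w, dt_pre_post, a_plus=0.05, a_minus=0.025,
                tau_stdp=10.0, w_min=0.0, w_max=1.0):
    """Pairwise STDP-like rule (assumed exponential shape): potentiate when
    the pre-synaptic spike precedes the post-synaptic one (dt > 0),
    depress otherwise. The update is local to the synapse, which is what
    makes it compatible with memristive hardware."""
    if dt_pre_post > 0:
        dw = a_plus * np.exp(-dt_pre_post / tau_stdp)
    else:
        dw = -a_minus * np.exp(dt_pre_post / tau_stdp)
    return float(np.clip(w + dw, w_min, w_max))
```

For example, a two-component state `[1.0, 0.9]` yields spike times `[0.0, 2.0]`; with sufficiently large weights the neuron fires shortly after the second spike, and the resulting pre/post timing differences drive the local weight changes.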