Reinforcement learning
Computer science
Initialization
Artificial neural network
Overshoot
Particle swarm optimization
Settling time
Algorithm
Controller
Nonlinear system
Fitness function
Artificial intelligence
Control theory
Machine learning
Step response
Genetic algorithm
Control
Control engineering
Engineering
Physics
Biology
Programming language
Telecommunications
Quantum mechanics
Agronomy
Authors
Iuliu Alexandru Zamfirache, Radu Precup, Raul-Cristian Roman, Emil M. Petriu
Identifier
DOI: 10.1016/j.ins.2021.10.070
Abstract
This paper presents a novel Reinforcement Learning (RL)-based control approach that combines a Deep Q-Learning (DQL) algorithm with the metaheuristic Gravitational Search Algorithm (GSA). The GSA is employed to initialize the weights and biases of the Neural Network (NN) involved in DQL in order to avoid the instability that is the main drawback of traditional, randomly initialized NNs. The quality of a particular set of weights and biases is measured at each iteration of the GSA-based initialization using a fitness function aimed at the predefined optimal control or learning objective. The data generated during the RL process are used to train an NN-based controller that can autonomously achieve the optimal reference tracking control objective. The proposed approach is compared with similar techniques that use different algorithms in the initialization step, namely the traditional random algorithm, the Grey Wolf Optimizer algorithm, and the Particle Swarm Optimization algorithm. The NN-based controllers produced by each of these techniques are compared using performance indices specific to optimal control, such as settling time, rise time, peak time, overshoot, and minimum cost function value. Real-time experiments are conducted to validate and test the proposed approach in the framework of the optimal reference tracking control of a nonlinear position servo system. The experimental results show the superiority of this approach over the three competing approaches.
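The GSA-based initialization step described in the abstract can be illustrated with a short sketch. The code below is a minimal, generic Gravitational Search Algorithm that searches for a flattened weight/bias vector minimizing a user-supplied fitness function; the function name `gsa_initialize`, the hyperparameters, and the toy quadratic fitness used in the example are illustrative assumptions, not the authors' implementation (in the paper, the fitness would evaluate the NN against the optimal control objective).

```python
import numpy as np

def gsa_initialize(fitness, dim, n_agents=20, n_iters=50,
                   g0=100.0, alpha=20.0, seed=0):
    """Sketch of GSA: find a weight/bias vector minimizing `fitness`.

    Each agent is one candidate weight vector; agents attract each other
    with forces proportional to their masses, which are derived from
    fitness (better fitness -> larger mass).
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_agents, dim))   # candidate weight vectors
    v = np.zeros((n_agents, dim))                  # agent velocities
    best_x, best_f = None, np.inf
    for t in range(n_iters):
        f = np.array([fitness(xi) for xi in x])
        i_best = int(np.argmin(f))
        if f[i_best] < best_f:
            best_f, best_x = float(f[i_best]), x[i_best].copy()
        # Masses from fitness: the best agent gets the largest mass.
        worst, best = f.max(), f.min()
        m = (worst - f) / (worst - best + 1e-12)
        M = m / (m.sum() + 1e-12)
        # Gravitational "constant" decays over iterations.
        G = g0 * np.exp(-alpha * t / n_iters)
        # Resultant acceleration on each agent from every other agent
        # (the agent's own mass cancels out of a = F / M_i).
        a = np.zeros_like(x)
        for i in range(n_agents):
            for j in range(n_agents):
                if i == j:
                    continue
                diff = x[j] - x[i]
                dist = np.linalg.norm(diff) + 1e-12
                a[i] += rng.random() * G * M[j] * diff / dist
        # Stochastic inertia plus acceleration, then move the agents.
        v = rng.random((n_agents, dim)) * v + a
        x = x + v
    return best_x, best_f

# Example: search for a 4-dimensional weight vector minimizing a toy
# quadratic surrogate (a stand-in for a tracking-error fitness).
weights, cost = gsa_initialize(lambda w: float(np.sum(w ** 2)), dim=4)
```

The returned vector would then be reshaped into the NN's weight matrices and bias vectors before DQL training begins, replacing the usual random initialization.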