Keywords
model predictive control; reinforcement learning; computer science; control theory; constraints; convergence; stability; controller; terminal cost; policy generator; nonlinear systems; scheme (mathematics); mathematical optimization; control; artificial intelligence; machine learning
Authors
Min Lin, Zhongqi Sun, Yuanqing Xia, Jinhui Zhang
Identifier
DOI: 10.1109/tnnls.2023.3273590
Abstract
This article proposes a novel reinforcement-learning-based model predictive control (RLMPC) scheme for discrete-time systems. The scheme integrates model predictive control (MPC) and reinforcement learning (RL) through policy iteration (PI), in which MPC acts as the policy generator and an RL technique is employed to evaluate the generated policy. The resulting value function is then used as the terminal cost of MPC, thereby improving the generated policy. Doing so eliminates the offline design of the terminal cost, the auxiliary controller, and the terminal constraint required in traditional MPC. Moreover, because the terminal constraint is removed, the proposed RLMPC allows a more flexible choice of the prediction horizon, which has great potential to reduce the computational burden. We provide a rigorous analysis of the convergence, feasibility, and stability properties of RLMPC. Simulation results show that RLMPC achieves nearly the same performance as traditional MPC for linear systems and outperforms it for nonlinear ones.
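For intuition, the policy-iteration loop described in the abstract (MPC generates a policy, an RL-style evaluation computes that policy's value, and the value function is fed back as the MPC terminal cost) has a closed form in the unconstrained linear-quadratic special case. The sketch below is a minimal illustration of that special case only; the dynamics, weights, horizon, and helper names are assumptions for illustration, not the paper's implementation, which also handles constraints and nonlinear systems.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

# Illustrative discrete-time double integrator and quadratic stage cost
# (all values are assumptions for this sketch, not taken from the paper).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)          # state weight
R = np.array([[0.1]])  # input weight
N = 5                  # prediction horizon

def mpc_policy_gain(P_term, horizon):
    """Policy generation: unconstrained MPC with terminal cost x'P_term x
    reduces to a backward Riccati recursion; returns the first-stage
    feedback gain K of the policy u = -K x."""
    P, K = P_term, np.zeros((1, 2))
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + K.T @ R @ K + (A - B @ K).T @ P @ (A - B @ K)
    return K  # gain applied to the first predicted input

def evaluate_policy(K):
    """Policy evaluation (the RL step): the cost of applying u = -K x
    forever solves the Lyapunov equation P = Q + K'RK + (A-BK)'P(A-BK)."""
    Acl = A - B @ K
    return solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)

# Policy iteration: start from a zero terminal cost, i.e. with no
# offline-designed terminal ingredients at all.
P = np.zeros((2, 2))
for it in range(100):
    K = mpc_policy_gain(P, N)    # MPC generates the policy
    P_next = evaluate_policy(K)  # its value becomes the new terminal cost
    if np.max(np.abs(P_next - P)) < 1e-10:
        break
    P = P_next

# In the LQ case the fixed point is the infinite-horizon optimum (DARE).
P_star = solve_discrete_are(A, B, Q, R)
print(f"converged after {it + 1} iterations")
print("max |P - P*| =", np.max(np.abs(P - P_star)))
```

Starting from a zero terminal cost, the iterates converge to the infinite-horizon Riccati solution, which mirrors the abstract's two claims for the linear case: the offline terminal design can be dispensed with, and the scheme recovers essentially the same performance as traditional MPC.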