Reinforcement learning
Control theory
Computer science
Lyapunov function
Controller
Stability theory
Converter
Lyapunov stability
Convergence
Control engineering
Engineering
Voltage
Artificial intelligence
Machine learning
Nonlinear systems
Electrical engineering
Identifier
DOI:10.1109/tie.2024.3522491
Abstract
Reinforcement learning (RL) has gained popularity in power electronics due to its ability to handle nonlinearities and its self-learning characteristics. When properly configured, an RL agent can autonomously learn the optimal control policy by interacting with the converter system. In particular, similar to conventional finite-control-set model predictive control (FCS-MPC), the RL agent can learn the optimal switching strategy for the power converter and achieve desirable control performance. However, the alteration of the closed-loop dynamics by the RL controller poses challenges in ensuring and assessing system stability. To address this, the article proposes formulating a Lyapunov function to guide the agent in learning an optimal control policy that achieves desirable control performance while ensuring closed-loop stability. Additionally, the practical stability region of the system is quantified by deriving a compact set characterizing the convergence of the voltage control error. Finally, the proposed Lyapunov-guided RL controller is validated through a demonstration framework with a practical experimental setup. Both simulation and experimental results confirm the effectiveness of the proposed method.
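To make the Lyapunov-guided idea concrete, the following is a minimal sketch (not the paper's actual formulation) of how a quadratic Lyapunov candidate can shape an RL reward: the agent is rewarded for tracking and penalized whenever the Lyapunov function fails to decrease along the closed-loop trajectory. The function names, the quadratic form `V(e) = eᵀPe`, and the weights are illustrative assumptions.

```python
import numpy as np

def lyapunov_value(error, P):
    """Quadratic Lyapunov candidate V(e) = e^T P e, with P positive definite.
    (Illustrative choice; the paper's actual Lyapunov function may differ.)"""
    return float(error @ P @ error)

def shaped_reward(error, next_error, P,
                  tracking_weight=1.0, lyapunov_weight=10.0):
    """Hypothetical reward shaping for a Lyapunov-guided RL agent.

    tracking term : penalizes the squared voltage control error
    Lyapunov term : penalizes any increase of V(e) between steps,
                    steering the agent toward switching actions under
                    which V decreases (i.e., toward stabilizing policies)
    """
    v_now = lyapunov_value(error, P)
    v_next = lyapunov_value(next_error, P)
    tracking = -tracking_weight * float(next_error @ next_error)
    # Hinge penalty on Delta V: zero when V decreases, negative otherwise
    lyapunov_penalty = -lyapunov_weight * max(v_next - v_now, 0.0)
    return tracking + lyapunov_penalty

# Usage: scalar voltage error wrapped as a 1-D state, P = identity
P = np.eye(1)
r_shrinking = shaped_reward(np.array([1.0]), np.array([0.5]), P)
r_growing = shaped_reward(np.array([0.5]), np.array([1.0]), P)
```

With these weights, an action that shrinks the error incurs no Lyapunov penalty, while one that grows the error is penalized both through the tracking term and through the positive ΔV, so the learned policy is biased toward trajectories along which the candidate Lyapunov function decreases.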