Topics
Reinforcement learning
Nonlinear system
Proportion (ratio)
Control theory (sociology)
Computer science
Tracking (education)
Control (management)
Reinforcement (rebar)
Control engineering
Artificial intelligence
Engineering
Physics
Psychology
Pedagogy
Quantum mechanics
Structural engineering
Authors
Xiaomin Liu,Gonghe Li,Linna Zhou,Chunyu Yang,Xinkai Chen
Identifier
DOI:10.1109/tii.2023.3292970
Abstract
In this article, based upon reinforcement learning (RL) and reduced-order control techniques, an ${H}_{\infty }$ output tracking control method is presented for nonlinear two-time-scale industrial systems with external disturbances and unknown dynamics. First, the original ${H}_{\infty }$ output tracking problem is transformed into a reduced-order problem for the augmented error system. Based on the zero-sum game idea, the Nash equilibrium solution is given and the tracking Hamilton–Jacobi–Isaacs (HJI) equation is established. Then, to handle the issue of unmeasurable states of the virtual reduced-order system, full-order system state data are collected to reconstruct the reduced-order system states, and a model-free RL algorithm is proposed to solve the tracking HJI equation. Next, the algorithm implementation is given under the actor–critic–disturbance framework. It is proved that the control policy obtained from the reconstructed state data makes the augmented error system asymptotically stable and satisfies the ${L}_{2}$-gain condition. Finally, the effectiveness of the proposed method is illustrated by a permanent-magnet synchronous motor experiment.
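The zero-sum game formulation in the abstract pits a minimizing control $u$ against a maximizing disturbance $w$, with the value function satisfying an HJI equation and the saddle point giving the Nash equilibrium. The following toy sketch is *not* the authors' model-free algorithm; it only illustrates that min–max structure on a hypothetical scalar linear system $x_{k+1} = a x_k + b u_k + d w_k$ with stage cost $x^2 + u^2 - \gamma^2 w^2$, where the quadratic value $V(x) = p x^2$ reduces the HJI equation to a scalar fixed-point iteration. All parameters ($a$, $b$, $d$, $\gamma$) are illustrative choices, not taken from the paper.

```python
def zero_sum_value_iteration(a=0.9, b=1.0, d=0.5, gamma=1.5,
                             iters=200, tol=1e-10):
    """Value iteration for a scalar discrete-time zero-sum game:
         x_{k+1} = a*x + b*u + d*w,
         cost     sum_k  x^2 + u^2 - gamma^2 * w^2,
       with quadratic value ansatz V(x) = p * x^2.
       Illustrative sketch only; not the paper's model-free RL method."""
    g2 = gamma ** 2
    p = 0.0
    history = [p]
    for _ in range(iters):
        # Saddle-point (u*, w*) for the current value V = p x^2,
        # evaluated at x = 1 (everything scales quadratically in x).
        denom = 1.0 + p * b**2 - p * d**2 / g2
        assert denom > 0, "gamma too small: saddle point does not exist"
        s = a / denom            # next state under (u*, w*)
        u = -p * b * s           # minimizing control
        w = p * d * s / g2       # maximizing (worst-case) disturbance
        # Bellman-Isaacs update for the value coefficient p.
        p_new = 1.0 + u**2 - g2 * w**2 + p * s**2
        history.append(p_new)
        converged = abs(p_new - p) < tol
        p = p_new
        if converged:
            break
    return p, u, w, history
```

At the fixed point, $p$ parameterizes the game value, and the recovered gains give the Nash policies: the control opposes the state while the worst-case disturbance pushes along it, mirroring the actor–critic–disturbance roles in the paper's framework.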