Reinforcement learning
Computer science
Convergence (economics)
Inverse
Mathematical optimization
Optimal control
Lyapunov function
Stability (learning theory)
State (computer science)
Control (management)
Control theory (sociology)
Algorithm
Mathematics
Artificial intelligence
Machine learning
Nonlinear system
Physics
Geometry
Quantum mechanics
Economics
Economic growth
Authors
Bosen Lian,Vrushabh S. Donge,Frank L. Lewis,Tianyou Chai,Ali Davoudi
Identifier
DOI: 10.1109/tnnls.2022.3186229
Abstract
This article proposes a data-driven inverse reinforcement learning (RL) control algorithm for nonzero-sum multiplayer games in linear continuous-time differential dynamical systems. The inverse RL problem in these games is solved by a learner that reconstructs the unknown expert players' cost functions from the expert's demonstrated optimal state and control input trajectories. The learner thus obtains the same control feedback gains and trajectories as the expert, using only data along system trajectories and without knowing the system dynamics. The article first proposes a model-based inverse RL policy iteration framework that has: 1) a policy evaluation step that reconstructs cost matrices using Lyapunov functions; 2) a state-reward weight improvement step using inverse optimal control (IOC); and 3) a policy improvement step using optimal control. Building on this model-based policy iteration algorithm, the article then develops an online data-driven off-policy inverse RL algorithm that requires no knowledge of the system dynamics or the expert's control gains. Rigorous convergence and stability analyses of the algorithms are provided. It is shown that the off-policy inverse RL algorithm yields unbiased solutions even when probing noise is added to satisfy the persistence of excitation (PE) condition. Finally, two different simulation examples validate the effectiveness of the proposed algorithms.
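Below is a minimal single-player (linear-quadratic) sketch of the three-step policy iteration just described, written in Python with NumPy/SciPy. Everything in it is an illustrative assumption rather than the paper's method: the system matrices A and B, the known control weight R, the expert's hidden state weight Q_e, and the particular update rules, which simplify the multiplayer nonzero-sum game to one player and use a model-based loop (the paper's off-policy algorithm works from trajectory data alone). Policy evaluation solves a Lyapunov equation under the expert's closed loop, the inverse-optimal-control step picks the state weight for which the current value matrix satisfies the algebraic Riccati equation, and policy improvement recovers the learner's feedback gain.

# Hypothetical single-player sketch of the inverse RL policy iteration flow;
# matrices and update rules are assumptions, not the paper's algorithm.
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Assumed linear continuous-time system dx/dt = A x + B u.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
R = np.eye(1)                      # control weight, assumed known to the learner

# Expert: optimal LQR gain for a state weight Q_e that the learner never sees.
Q_e = np.diag([6.0, 1.0])
P_e = solve_continuous_are(A, B, Q_e, R)
K_e = np.linalg.solve(R, B.T @ P_e)            # expert feedback u_e = -K_e x

# Learner: reconstruct a state weight Q that makes the expert's gain optimal.
Q = np.eye(2)                                  # initial guess of the state weight
A_e = A - B @ K_e                              # expert's closed-loop matrix
for _ in range(200):
    # 1) Policy evaluation: solve A_e^T P + P A_e + Q + K_e^T R K_e = 0.
    P = solve_continuous_lyapunov(A_e.T, -(Q + K_e.T @ R @ K_e))
    # 2) State-reward weight improvement (inverse optimal control): choose the
    #    Q for which P satisfies the algebraic Riccati equation.
    Q = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P))
    # 3) Policy improvement (optimal control): learner's feedback gain.
    K = np.linalg.solve(R, B.T @ P)

# At a fixed point of this loop the expert's gain is optimal for the
# reconstructed Q; the printed error shows how close the learner's gain is.
print("gain error ||K - K_e|| =", np.linalg.norm(K - K_e))
print("reconstructed state weight Q =\n", Q)

The paper's data-driven off-policy version replaces the model-based Lyapunov and Riccati computations above with estimates obtained from measured state and input trajectories, with probing noise added to meet the persistence-of-excitation condition while keeping the solutions unbiased.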