Keywords
Nash equilibrium
Differential game
Computer science
Robustness (evolution)
Reinforcement learning
Synchronization (alternating current)
Mathematical optimization
Differential (mechanical device)
Control theory (sociology)
Mathematics
Control (management)
Artificial intelligence
Engineering
Computer network
Biochemistry
Chemistry
Channel (broadcasting)
Gene
Aerospace engineering
Authors
Yu Shi, Yongzhao Hua, Jianglong Yu, Xiwang Dong, Zhang Ren
Identifier
DOI: 10.1631/fitee.2200001
Abstract
This paper studies the multi-agent differential game problem and its application to cooperative synchronization control. A systematic formulation and analysis method for the multi-agent differential game is proposed, together with a data-driven methodology based on the reinforcement learning (RL) technique. First, it is pointed out that, because of the coupling of networked interactions, typical distributed controllers do not in general lead to a global Nash equilibrium of the differential game. Second, to address this, an alternative local Nash solution is derived by defining a best-response concept, and the problem is decomposed into local differential games. An off-policy RL algorithm using neighboring interactive data is constructed to update the controller without requiring a system model, and its stability and robustness properties are proved. Third, to further resolve this dilemma, another differential game configuration is investigated based on modified coupling index functions. In contrast to the previous case, the distributed solution achieves a global Nash equilibrium while guaranteeing stability. An equivalent parallel RL method is constructed corresponding to this Nash solution. Finally, simulation results illustrate the effectiveness of the learning process and the stability of the synchronization control.
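As context for the Nash-equilibrium discussion in the abstract, the following is a minimal sketch of a standard graphical differential game formulation; it assumes linear agent dynamics, and the symbols (graph weights \(a_{ij}\), pinning gains \(g_i\), cost weights \(Q_i, R_{ij}\)) are illustrative, not taken from the paper:

\[
\dot{x}_i = A x_i + B_i u_i, \qquad
\varepsilon_i = \sum_{j \in \mathcal{N}_i} a_{ij}\,(x_i - x_j) + g_i\,(x_i - x_0),
\]
\[
J_i(u_i, u_{-i}) = \int_0^{\infty} \Big( \varepsilon_i^{\top} Q_i\, \varepsilon_i + u_i^{\top} R_{ii}\, u_i + \sum_{j \in \mathcal{N}_i} u_j^{\top} R_{ij}\, u_j \Big)\, \mathrm{d}t .
\]

A strategy profile \((u_1^*, \dots, u_N^*)\) is a global Nash equilibrium if \(J_i(u_i^*, u_{-i}^*) \le J_i(u_i, u_{-i}^*)\) for every agent \(i\) and every admissible \(u_i\). Because each \(J_i\) depends on neighbors' states and inputs through \(\varepsilon_i\), a controller that is optimal for the local index alone need not satisfy this global condition; this is the coupling issue the abstract refers to.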
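The paper's learner is a continuous-time off-policy RL algorithm driven by neighboring agents' data, whose equations the abstract does not give. The sketch below is therefore only a minimal single-agent, discrete-time analogue: least-squares Q-learning (policy iteration from a fixed off-policy batch) for an LQR problem. The dynamics A, B, the weights Q, R, and the initial gain K are all illustrative assumptions; A and B are used only to synthesize data and never appear inside the learning update, so the update itself is model-free.

```python
import numpy as np

# Hypothetical linear agent used only to generate data (assumption);
# the learning update below never reads A or B.
np.random.seed(0)
n, m = 2, 1
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(n)   # illustrative state weight
R = np.eye(m)   # illustrative control weight

def phi(x, u):
    """Quadratic features of z = [x; u], so Q(x, u) = phi(x, u) @ w."""
    z = np.concatenate([x, u])
    return np.outer(z, z)[np.triu_indices(z.size)]

# Behavior policy = target policy + exploration noise (hence off-policy).
K = np.array([[10.0, 5.0]])   # assumed initial stabilizing gain
data = []
x = np.random.randn(n)
for _ in range(500):
    u = -K @ x + 0.5 * np.random.randn(m)
    x_next = A @ x + B @ u
    data.append((x, u, x_next))
    x = x_next if np.linalg.norm(x_next) < 50 else np.random.randn(n)

# Least-squares policy iteration: evaluate the target policy on the
# fixed off-policy batch, then improve it greedily.
for _ in range(10):
    Phi, y = [], []
    for x, u, x_next in data:
        u_next = -K @ x_next                        # target-policy action
        Phi.append(phi(x, u) - phi(x_next, u_next)) # Bellman difference
        y.append(x @ Q @ x + u @ R @ u)             # one-step stage cost
    w = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)[0]
    H = np.zeros((n + m, n + m))                    # rebuild symmetric H
    H[np.triu_indices(n + m)] = w
    H = (H + H.T) / 2.0
    K = np.linalg.solve(H[n:, n:], H[n:, :n])       # argmin over u

print("learned feedback gain K:", K)
```

Because the behavior policy that generates the data differs from the target policy being evaluated, this captures the off-policy character the abstract describes; extending it to the networked setting would replace the state x with the neighborhood error and add neighbors' inputs to the cost.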