PID controller
Reinforcement learning
Computer science
Control theory
Multi-agent system
Controller
Derivative
Control (management)
Mathematical optimization
Control engineering
Artificial intelligence
Engineering
Mathematics
Finance
Temperature control
Agronomy
Economics
Biology
Identifiers
DOI:10.1016/j.isatra.2022.06.026
Abstract
This paper develops a novel Proportional-Integral-Derivative (PID) tuning method for multi-agent systems with a reinforced self-learning capability for achieving the optimal consensus of all agents. Unlike traditional model-based and data-driven PID tuning methods, the developed PID self-learning method updates the controller parameters by actively interacting with the unknown environment, with guaranteed consensus and optimized performance of the agents. First, the PID control-based consensus problem for multi-agent systems is formulated. Then, finding the PID gains is converted into solving a nonzero-sum game problem, and an off-policy Q-learning algorithm with a critic-only structure is proposed to update the PID gains using data alone, without knowledge of the agents' dynamics. Finally, simulations are given to verify the effectiveness of the proposed method.
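The abstract's key idea is tuning PID gains purely from measured data, treating the plant as a black box. As a minimal illustrative sketch only (a rollout-based hill-climbing stand-in, not the paper's off-policy Q-learning algorithm or its multi-agent setting), the snippet below adjusts the gains of a scalar PID regulation loop using nothing but closed-loop cost evaluations; the plant parameters `A`, `B`, the quadratic cost weights, and all function names are assumptions for illustration.

```python
import numpy as np

# Hypothetical scalar plant x_{k+1} = A*x_k + B*u_k; the tuner never
# reads A or B directly, it only observes closed-loop rollout costs.
A, B = 0.9, 0.5

def rollout_cost(gains, x0=1.0, steps=50):
    """Closed-loop quadratic cost of PID gains when regulating x to zero."""
    kp, ki, kd = gains
    x, integ, e_prev = x0, 0.0, x0
    cost = 0.0
    for _ in range(steps):
        e = x                                   # tracking error (reference = 0)
        integ += e                              # discrete integral of the error
        u = -(kp * e + ki * integ + kd * (e - e_prev))
        e_prev = e
        cost += e * e + 0.1 * u * u             # assumed state + control penalty
        x = A * x + B * u                       # black-box plant step
    return cost

def tune_pid(iters=200, step=0.05, seed=0):
    """Data-only gain search: random-perturbation hill climbing on rollout cost."""
    rng = np.random.default_rng(seed)
    gains = np.array([0.1, 0.0, 0.0])           # conservative initial gains
    best = rollout_cost(gains)
    for _ in range(iters):
        cand = gains + step * rng.standard_normal(3)
        c = rollout_cost(cand)
        if c < best:                            # keep only improving candidates
            gains, best = cand, c
    return gains, best
```

Usage: `gains, cost = tune_pid()` returns gains whose rollout cost is no worse than the initial guess, showing how controller parameters can improve through interaction with an unknown plant alone.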