Reinforcement learning
Trajectory
Computer science
Control (management)
Multi-agent system
Artificial intelligence
Mathematical optimization
Topology (electrical circuits)
Control theory (sociology)
Distributed computing
Engineering
Mathematics
Physics
Astronomy
Electrical engineering
Authors
Chao Pan,Xiaohong Nian,Xunhua Dai,Haibo Wang,Hongyun Xiong
Source
Journal: Lecture Notes in Electrical Engineering
Date: 2023-01-01
Pages: 1149-1159
Identifier
DOI:10.1007/978-981-99-0479-2_104
Abstract
This paper addresses the multi-agent circular formation control problem using the multi-agent deep deterministic policy gradient (MADDPG) deep reinforcement learning algorithm combined with a leader-follower method, taking into account the distance and angle constraints between agents. This approach overcomes the difficulty of accurately modeling the controlled objects in earlier control methods and requires no preconditions such as knowledge of the network topology or system order. In addition, for the formation movement problem, we predefine a virtual leader that moves along a random curved trajectory and adopt a two-stage training method: building on the learned circular formation behavior policy, each agent continues training to follow the virtual leader. Simulation experiments verify the effectiveness of the algorithm.
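The abstract mentions distance and angle constraints between agents on a circle around a leader. As a minimal sketch of how such constraints might enter a per-agent reinforcement-learning reward (the paper does not give its reward function; the function name, weights, and penalty form below are assumptions for illustration), one could penalize deviation from the desired leader distance and from the desired angular gap to a neighboring agent:

```python
import math

def circular_formation_reward(agent_pos, leader_pos, radius,
                              neighbor_pos, desired_angle,
                              w_dist=1.0, w_ang=0.5):
    """Hypothetical per-agent reward: zero when the agent sits exactly
    `radius` from the leader with the desired angular separation from
    its neighbor, increasingly negative as either constraint is violated."""
    # Distance constraint: agent should stay `radius` away from the leader.
    dx, dy = agent_pos[0] - leader_pos[0], agent_pos[1] - leader_pos[1]
    dist_err = abs(math.hypot(dx, dy) - radius)

    # Angle constraint: the angular gap (about the leader) between this
    # agent and its neighbor should match `desired_angle`.
    nx, ny = neighbor_pos[0] - leader_pos[0], neighbor_pos[1] - leader_pos[1]
    gap = math.atan2(ny, nx) - math.atan2(dy, dx)
    gap = math.atan2(math.sin(gap), math.cos(gap))  # wrap to (-pi, pi]
    ang_err = abs(abs(gap) - desired_angle)

    # Weighted negative penalty, as is common in formation-control rewards.
    return -(w_dist * dist_err + w_ang * ang_err)
```

For example, an agent at (1, 0) with its neighbor at (0, 1), leader at the origin, radius 1, and a desired 90-degree gap incurs zero penalty, while drifting to (2, 0) incurs a distance penalty of 1. In a two-stage scheme like the one described, such a shaping term could be combined first with a formation-only objective and later with a leader-tracking term.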