Authors
Yuwan Gu, Zhaoqin Zhu, Jidong Lv, Lin Shi, Zhenjie Hou, Shoukun Xu
Identifier
DOI:10.1007/s40747-022-00948-7
Abstract
To achieve collision-free path planning in complex environments, the Munchausen deep Q-network (M-DQN) is applied to a mobile robot to learn the best policy. Building on Soft-DQN, M-DQN adds the scaled log-policy to the immediate reward, which encourages the agent to explore more. However, M-DQN suffers from slow convergence. This paper proposes an improved algorithm, DM-DQN, to address that problem. First, the network structure of M-DQN is decomposed into a value function and an advantage function, decoupling action selection from action evaluation; this speeds up convergence, improves generalization, and lets the agent learn the best policy faster. Second, to keep the robot's trajectory from passing too close to obstacle edges, a reward function based on an artificial potential field is proposed to drive the trajectory away from the vicinity of obstacles. Simulation results show that the method learns more efficiently and converges faster than DQN, Dueling DQN, and M-DQN in both static and dynamic environments, and plans collision-free paths that keep clear of obstacles.
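The three ingredients the abstract names can be sketched in a few lines: the Munchausen term (adding the scaled, clipped log-policy to the immediate reward), the dueling decomposition (Q assembled from a state value and mean-centered advantages), and an artificial-potential-field repulsive penalty. The sketch below is illustrative only; the temperature `TAU`, scaling `ALPHA`, clipping bound `L0`, and the APF parameters `d0`/`eta` are assumed values, not taken from the paper.

```python
import math

TAU = 0.03    # softmax temperature (assumed value)
ALPHA = 0.9   # Munchausen scaling factor (assumed value)
L0 = -1.0     # lower clipping bound for the scaled log-policy term

def softmax_policy(q_values, tau=TAU):
    """Soft-DQN policy: softmax over Q-values at temperature tau."""
    m = max(q_values)  # subtract max for numerical stability
    exps = [math.exp((q - m) / tau) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

def munchausen_reward(reward, q_values, action):
    """M-DQN reward: environment reward plus the scaled log-policy
    of the taken action, clipped below at L0."""
    pi = softmax_policy(q_values)
    tau_log_pi = TAU * math.log(pi[action] + 1e-8)
    return reward + ALPHA * max(tau_log_pi, L0)

def dueling_aggregate(value, advantages):
    """Dueling head used in DM-DQN:
    Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def repulsive_reward(dist, d0=1.0, eta=0.5):
    """APF-style shaping term: a negative reward that grows as the
    robot approaches an obstacle within influence radius d0."""
    if dist >= d0:
        return 0.0
    return -0.5 * eta * (1.0 / dist - 1.0 / d0) ** 2
```

In this sketch the log-policy term is always non-positive, so the Munchausen bonus penalizes unlikely actions and stabilizes learning, while the mean-centered advantages make the value/advantage split identifiable.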