Keywords
Computer science
Path (computing)
Motion planning
Reinforcement learning
Mobile robot
Artificial intelligence
Algorithm
Robot
Computer network
Authors
Yunjie Zhang, Y.Y. Liu, Yadong Chen, Zhenjian Yang
Identifier
DOI:10.1088/1402-4896/adb79a
Abstract
This paper addresses two challenges in Q-learning for mobile robot path planning: low learning efficiency and slow convergence. An ARE-QL algorithm with an optimized search range is proposed to address these issues. First, the reward function of Q-learning is enhanced: a dynamic, continuous reward mechanism based on heuristic environmental information is introduced to reduce the robot's search space and improve learning efficiency. Second, the pheromone mechanism of the ant colony algorithm is integrated through a pheromone-guided matrix and path filtering, optimizing the search range and accelerating convergence to the optimal path. Additionally, an adaptive exploration strategy based on state familiarity improves the algorithm's efficiency and robustness. Simulation results demonstrate that ARE-QL outperforms standard Q-learning and other improved algorithms, achieving faster convergence and higher path quality across environments of varying complexity. ARE-QL improves path-planning efficiency while demonstrating strong adaptability and robustness, providing new insights and solutions for mobile robot path planning research.
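The heuristic reward-shaping idea summarized in the abstract can be illustrated with a minimal tabular Q-learning sketch. This is an illustrative assumption, not the paper's ARE-QL implementation: the grid environment, the `plan` function, and all reward constants are invented for demonstration, and the pheromone-guided matrix and state-familiarity exploration are omitted. The dense reward proportional to the decrease in Manhattan distance to the goal stands in for the "dynamic continuous reward mechanism based on heuristic environmental information":

```python
import random

def plan(grid, start, goal, episodes=300, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a 4-connected grid (0 = free, 1 = obstacle),
    with a continuous distance-shaped reward (illustrative sketch only)."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    Q = {}  # (state, action) -> estimated return

    def dist(s):  # Manhattan distance to goal: the heuristic environmental signal
        return abs(s[0] - goal[0]) + abs(s[1] - goal[1])

    def valid(s):
        return 0 <= s[0] < rows and 0 <= s[1] < cols and grid[s[0]][s[1]] == 0

    for _ in range(episodes):
        s = start
        for _ in range(4 * rows * cols):  # cap episode length
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
            nxt = (s[0] + moves[a][0], s[1] + moves[a][1])
            if not valid(nxt):
                r, nxt = -1.0, s                 # collision: penalize, stay put
            elif nxt == goal:
                r = 10.0                         # terminal reward at the goal
            else:
                r = 0.1 * (dist(s) - dist(nxt))  # dense shaping toward the goal
            best_next = max(Q.get((nxt, i), 0.0) for i in range(4))
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (r + gamma * best_next - q)
            if nxt == goal:
                break
            s = nxt

    # extract the greedy path from the learned Q-table
    path, s = [start], start
    for _ in range(rows * cols):
        if s == goal:
            return path
        a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
        s = (s[0] + moves[a][0], s[1] + moves[a][1])
        if not valid(s):
            return None                          # policy did not converge
        path.append(s)
    return path if s == goal else None
```

Without the shaping term the agent only receives feedback on reaching the goal, so early episodes wander the full grid; the dense heuristic signal is what shrinks the effective search space, which is the motivation the abstract gives for the enhanced reward function.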