Reinforcement learning
Motion planning
Probabilistic roadmap
Computer science
Artificial intelligence
Hyperparameter
Random tree
Robot
Path (computing)
Probabilistic logic
Generalization
Planner
Machine learning
Sampling (signal processing)
Adaptability
Computer vision
Mathematics
Mathematical analysis
Ecology
Filter (signal processing)
Biology
Programming language
Authors
Teham Bhuiyan,Linh Kästner,Yifan Hu,Benno Kutschank,Jens Lambrecht
Identifier
DOI:10.1109/iccre57112.2023.10155608
Abstract
Traditionally, collision-free path planning for industrial robots is realized by sampling-based algorithms such as RRT (Rapidly-exploring Random Tree) and PRM (Probabilistic Roadmap). Sampling-based algorithms require long computation times, especially in complex environments, and the environment in which they are employed needs to be known beforehand. Deploying these approaches in new environments therefore demands tedious hyperparameter tuning, which is time- and cost-intensive. DRL (Deep Reinforcement Learning), on the other hand, has shown remarkable results in dealing with complex environments, generalizing to new problem instances, and solving motion planning problems efficiently. On that account, this paper proposes a Deep-Reinforcement-Learning-based motion planner for robotic manipulators. We propose an easily reproducible method to train an agent in randomized scenarios, achieving generalization to unknown environments. We evaluated our model against state-of-the-art sampling- and DRL-based planners in several experiments containing static and dynamic obstacles. Results show the adaptability of our agent in new environments and its superiority in terms of path length and execution time compared to conventional methods. Our code is available on GitHub [1].
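For readers unfamiliar with the sampling-based baseline the abstract contrasts against, the core RRT loop can be sketched as follows. This is a minimal illustrative sketch in 2D, not the paper's implementation; the collision checker `is_free` is a hypothetical callback supplied by the caller, and the constants (goal bias, step size) are arbitrary choices for the example.

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, max_iters=2000, goal_tol=0.5):
    """Minimal 2D RRT: grow a tree from `start` toward random samples
    until a node lands within `goal_tol` of `goal`. Returns the path as
    a list of points, or None if no path is found within the budget."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Goal biasing: sample the goal itself 5% of the time.
        if random.random() < 0.05:
            sample = goal
        else:
            sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # Find the tree node nearest to the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer one fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue  # reject samples that collide with obstacles
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            # Reached the goal region: walk parents back to the root.
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```

The tedium the abstract refers to shows up in exactly these knobs: `step`, `max_iters`, the goal bias, and the collision checker all typically need re-tuning when the environment changes, which is the engineering cost the DRL-based planner aims to avoid.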