Trajectory
Reinforcement learning
Computer science
Mobile robot
Collision avoidance
Model predictive control
Motion planning
Robot
Artificial intelligence
Collision
Tracking (education)
Path (computing)
Control (management)
Physics
Programming language
Psychology
Astronomy
Computer security
Pedagogy
Authors
Ze Zhang,Yao Cai,Kristian Ceder,Arvid Enliden,Ossian Eriksson,Soleil Kylander,R. Sridhara,Knut Åkesson
Identifier
DOI: 10.1109/case56687.2023.10260515
Abstract
In this paper, we present an efficient approach to real-time collision-free navigation for mobile robots. By integrating deep reinforcement learning with model predictive control, our aim is to achieve both collision avoidance and computational efficiency. The methodology begins with training a preliminary agent using deep Q-learning, enabling it to generate actions for the next time steps. Instead of executing these actions directly, they are used to generate a local reference trajectory that avoids obstacles present on the original reference path. Subsequently, this local trajectory is employed within an MPC trajectory-tracking framework to provide collision-free guidance for the mobile robot. Experimental results demonstrate that the proposed DQN-MPC hybrid approach outperforms pure MPC in terms of time efficiency and solution quality.
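A rough illustration of how the described pipeline fits together is sketched below. It is an assumption-laden toy, not the authors' code: the trained deep Q-network is replaced by a greedy heuristic over a small discrete action set, the MPC tracker is approximated by random-shooting over short velocity sequences, and the obstacle, dynamics, horizon, and all names (dqn_policy, build_reference_trajectory, mpc_track) are hypothetical.

```python
# Minimal, hypothetical sketch of the pipeline described in the abstract:
# a learned policy proposes actions, those actions are rolled out into a
# local obstacle-avoiding reference trajectory, and a receding-horizon
# tracker follows that reference. All values below are assumptions.
import numpy as np

DT = 0.2                              # time step [s]
HORIZON = 8                           # reference / tracking horizon (steps)
OBSTACLE = np.array([2.0, 0.0])       # single static obstacle centre (assumed)
OBSTACLE_R = 0.6                      # keep-out radius [m]

# Discrete action set for the Q-learning agent (assumed): planar velocities.
ACTIONS = {
    0: np.array([0.0, 0.0]),          # stay
    1: np.array([0.5, 0.0]),          # forward
    2: np.array([0.5, 0.4]),          # forward-left
    3: np.array([0.5, -0.4]),         # forward-right
}

def dqn_policy(state, goal):
    """Stand-in for the trained DQN: greedily pick the action whose next
    state makes progress toward the goal while keeping clear of the obstacle."""
    best_a, best_cost = 0, np.inf
    for a, vel in ACTIONS.items():
        nxt = state + DT * vel
        clearance = np.linalg.norm(nxt - OBSTACLE)
        cost = np.linalg.norm(nxt - goal) + 8.0 * max(0.0, 1.2 - clearance)
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a

def build_reference_trajectory(state, goal):
    """Roll the learned policy forward to produce a local reference trajectory
    that detours around the obstacle, instead of executing its actions directly."""
    ref, x = [], state.copy()
    for _ in range(HORIZON):
        x = x + DT * ACTIONS[dqn_policy(x, goal)]
        ref.append(x.copy())
    return np.array(ref)

def mpc_track(state, ref, n_samples=200, rng=np.random.default_rng(0)):
    """Random-shooting approximation of the MPC tracker: sample velocity
    sequences, score them by tracking error, control effort, and a collision
    penalty, and return the first control of the best sequence."""
    best_u, best_cost = None, np.inf
    for _ in range(n_samples):
        u = rng.uniform(-0.6, 0.6, size=(HORIZON, 2))
        x, cost = state.copy(), 0.0
        for k in range(HORIZON):
            x = x + DT * u[k]
            cost += np.sum((x - ref[k]) ** 2) + 0.05 * np.sum(u[k] ** 2)
            if np.linalg.norm(x - OBSTACLE) < OBSTACLE_R:
                cost += 100.0          # soft collision penalty
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u[0]                   # receding horizon: apply only the first input

if __name__ == "__main__":
    state, goal = np.array([0.0, 0.0]), np.array([4.0, 0.0])
    for step in range(80):
        ref = build_reference_trajectory(state, goal)   # DQN-based local reference
        u = mpc_track(state, ref)                       # MPC-style tracking step
        state = state + DT * u
        if np.linalg.norm(state - goal) < 0.2:
            print(f"goal reached after {step + 1} steps")
            break
    print("final state:", np.round(state, 2))
```

The structural point matches the abstract: the learned policy's actions are never executed directly; they only shape a local reference trajectory, which the receding-horizon tracker then follows while enforcing its own collision penalty.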