LSTM-Enhanced Deep Reinforcement Learning for Robust Trajectory Tracking Control of Skid-Steer Mobile Robots Under Terra-Mechanical Constraints

Authors
José Alcayaga Alcayaga, Oswaldo Menéndez, Miguel Torres‐Torriti, Juan Pablo Vásconez, Tito Arévalo-Ramirez, Alvaro Prado
Source
Journal: Robotics [MDPI AG]
Volume/Issue: 14 (6): 74. Cited by: 7
Identifier
DOI: 10.3390/robotics14060074
Abstract

Autonomous navigation in mining environments is challenged by complex wheel–terrain interaction, traction losses caused by slip dynamics, and sensor limitations. This paper investigates the effectiveness of Deep Reinforcement Learning (DRL) techniques for the trajectory tracking control of skid-steer mobile robots operating under terra-mechanical constraints. Four state-of-the-art DRL algorithms, namely Proximal Policy Optimization (PPO), Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), and Soft Actor–Critic (SAC), are selected to evaluate their ability to generate stable and adaptive control policies under varying environmental conditions. To address the inherent partial observability in real-world navigation, this study presents an original approach that integrates Long Short-Term Memory (LSTM) networks into DRL-based controllers. This allows control agents to retain and leverage temporal dependencies to infer unobservable system states. The developed agents were trained and tested in simulations and then assessed in field experiments under uneven terrain and dynamic model parameter changes that lead to traction losses in mining environments, targeting various trajectory tracking tasks, including lemniscate and squared-type reference trajectories. This contribution strengthens the robustness and adaptability of DRL agents by enabling better generalization of learned policies compared with their baseline counterparts, while also significantly improving trajectory tracking performance. In particular, LSTM-based controllers achieved reductions in tracking errors of 10%, 74%, 21%, and 37% for DDPG-LSTM, PPO-LSTM, TD3-LSTM, and SAC-LSTM, respectively, compared with their non-recurrent counterparts. Furthermore, DDPG-LSTM and TD3-LSTM reduced their control effort, measured by the total variation of the control input, by 15% and 20%, respectively, compared with their baseline controllers.
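The core idea of the LSTM integration described above is that a recurrent cell accumulates evidence across time steps, so the policy can infer states (e.g., wheel slip) that no single observation reveals. The toy scalar LSTM cell below is a minimal sketch of this mechanism only; the gate weights are hand-picked for illustration and do not reflect the paper's trained network architecture.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, p):
    """One step of a scalar LSTM cell. p maps each gate name to an
    (input weight, recurrent weight, bias) triple."""
    f = sigmoid(p["f"][0] * x + p["f"][1] * h + p["f"][2])    # forget gate
    i = sigmoid(p["i"][0] * x + p["i"][1] * h + p["i"][2])    # input gate
    o = sigmoid(p["o"][0] * x + p["o"][1] * h + p["o"][2])    # output gate
    g = math.tanh(p["g"][0] * x + p["g"][1] * h + p["g"][2])  # candidate
    c = f * c + i * g      # cell state accumulates the observation history
    h = o * math.tanh(c)   # hidden state exposed to the control policy
    return h, c

# Hand-picked weights: forget/input/output gates saturated near 1, so the
# cell simply integrates tanh(x). The hidden state then reflects the whole
# slip history rather than only the latest (partially observed) sample.
p = {"f": (0.0, 0.0, 10.0),
     "i": (0.0, 0.0, 10.0),
     "o": (0.0, 0.0, 10.0),
     "g": (1.0, 0.0, 0.0)}
h = c = 0.0
for slip_obs in [0.1, 0.1, 0.1, 0.1]:
    h, c = lstm_step(slip_obs, h, c, p)
# h now exceeds the single-step response tanh(0.1): memory at work.
```

A memoryless (non-recurrent) policy would map each 0.1 observation to the same output every step; the recurrent cell's response grows with repeated evidence, which is exactly what lets the DRL agent distinguish persistent traction loss from a one-off disturbance.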
Findings from this work provide valuable insights into the role of memory-augmented reinforcement learning for robust motion control in unstructured, high-uncertainty environments.
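The abstract reports results in terms of tracking error and the total variation of the control input, but does not spell out the formulas. The sketch below shows plausible definitions under stated assumptions: RMS position error over (x, y) waypoints, total variation as the sum of absolute successive control differences, and a Gerono lemniscate as one possible form of the figure-eight reference trajectory. None of these exact choices is confirmed by the source.

```python
import math

def total_variation(u):
    """Total variation of a control sequence: sum of absolute
    successive differences. Smaller values mean smoother control."""
    return sum(abs(b - a) for a, b in zip(u, u[1:]))

def rms_tracking_error(actual, reference):
    """Root-mean-square Euclidean distance between tracked and
    reference (x, y) waypoints."""
    sq = [(ax - rx) ** 2 + (ay - ry) ** 2
          for (ax, ay), (rx, ry) in zip(actual, reference)]
    return math.sqrt(sum(sq) / len(sq))

def lemniscate(t, a=1.0):
    """Point on a lemniscate of Gerono at parameter t (an assumed
    form of the paper's figure-eight reference)."""
    return (a * math.cos(t), a * math.sin(t) * math.cos(t))

# An oscillating command sequence costs more control effort
# (higher total variation) than a smooth ramp to the same value.
u_oscillating = [0.0, 0.5, -0.2, 0.6, 0.1]
u_smooth = [0.0, 0.2, 0.3, 0.4, 0.5]
assert total_variation(u_smooth) < total_variation(u_oscillating)
```

Under these definitions, the paper's reported "15% and 20% reduction in control effort" would correspond to `total_variation(u_lstm)` being 0.85 and 0.80 times `total_variation(u_baseline)` for DDPG and TD3, respectively.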