Obstacle avoidance
Reinforcement learning
Reinforcement
Computer science
End-to-end principle
Avoidance learning
Obstacle
Artificial intelligence
Aeronautics
Neuroscience
Psychology
Engineering
Geography
Mobile robot
Robot
Social psychology
Archaeology
Authors
Mohammed B. Mohiuddin, Igor Boiko, Vu Phi Tran, Matthew Garratt, Ayman M. Abdallah, Yahya Zweiri
Identifiers
DOI: 10.1038/s41598-025-18220-6
Abstract
This study introduces an end-to-end Reinforcement Learning (RL) approach for controlling Unmanned Aerial Vehicles (UAVs) with slung loads, addressing both navigation and obstacle avoidance in real-world environments. Unlike traditional methods that rely on separate flight controllers, path planners, and obstacle avoidance systems, our unified RL strategy seamlessly integrates these components, reducing both computational and design complexities while maintaining synchronous operation and optimal goal-tracking performance without the need for pre-training in various scenarios. Additionally, the study explores a reduced observation space model, referred to as CompactRL-8, which utilizes only eight observations and excludes noisy load swing rate measurements. This approach differs from most full-state observation RL methods, which typically include these rates. CompactRL-8 outperforms the full ten-observation model, demonstrating a 58.79% increase in speed and a ten-fold improvement in obstacle clearance. Our method also surpasses the state-of-the-art adaptive control methods, showing an 8% enhancement in path efficiency and a four-fold increase in load swing stability. Utilizing a detailed system model, we achieve successful Sim2Real transfer without time-consuming re-tuning, confirming the method's practical applicability. This research advances RL-based UAV slung-load system control, fostering the development of more efficient and reliable autonomous aerial systems for applications like urban load transport. A video demonstration of the experiments can be found at https://youtu.be/GtGHhOCmy3M.
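The abstract contrasts a full ten-observation policy with the reduced CompactRL-8 model, which drops the noisy load swing rate measurements. The sketch below is only an illustration of what such an observation-space reduction could look like; the exact composition of the observation vector (relative goal position, UAV velocity, swing angles, swing rates) and the function names are assumptions, not the paper's actual definitions.

```python
import numpy as np

def full_observation(rel_pos, uav_vel, swing_angles, swing_rates):
    """Hypothetical 10-D observation: relative goal position (3), UAV velocity (3),
    load swing angles (2), and load swing rates (2)."""
    return np.concatenate([rel_pos, uav_vel, swing_angles, swing_rates])

def compact_observation(rel_pos, uav_vel, swing_angles):
    """Hypothetical 8-D observation in the spirit of CompactRL-8:
    the two noisy swing-rate measurements are simply omitted."""
    return np.concatenate([rel_pos, uav_vel, swing_angles])

if __name__ == "__main__":
    rel_pos = np.array([2.0, -1.0, 0.5])    # goal position relative to UAV [m]
    uav_vel = np.array([0.3, 0.0, 0.1])     # UAV linear velocity [m/s]
    swing = np.array([0.05, -0.02])         # load swing angles [rad]
    swing_rate = np.array([0.01, 0.00])     # load swing rates [rad/s] (noisy)

    print(full_observation(rel_pos, uav_vel, swing, swing_rate).shape)  # (10,)
    print(compact_observation(rel_pos, uav_vel, swing).shape)           # (8,)
```

A smaller observation vector like this shrinks the policy's input layer and removes the sensor channels most affected by noise, which is consistent with the abstract's reported gains in speed and obstacle clearance.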