Workspace
Robotics
Computer science
Trajectory
Temporal logic
Motion planning
Linear temporal logic
Artificial intelligence
Computer vision
Motion (physics)
Optics (focusing)
Real-time computing
Control engineering
Simulation
Engineering
Physics
Optics
Programming language
Astronomy
Authors
Zhangli Zhou, Ziyang Chen, Mingyu Cai, Zhijun Li, Zhen Kan, Chun-Yi Su
Source
Journal: IEEE Transactions on Industrial Electronics
[Institute of Electrical and Electronics Engineers]
Date: 2024-06-01
Volume/Issue: 71 (6): 5983-5992
Citations: 1
Identifier
DOI: 10.1109/tie.2023.3299048
Abstract
Temporal logic-based motion planning has been extensively studied to address complex robotic tasks. However, existing works primarily focus on static environments or assume the robot has full observations of the environment. This limits their practical applications since real-world environments are often dynamic, and robots may suffer from partial observations. To tackle these issues, this study proposes a framework for vision-based reactive temporal logic motion planning (V-RTLMP) for robots integrated with LiDAR sensing. The V-RTLMP is designed to perform high-level linear temporal logic (LTL) tasks in unstructured dynamic environments. The framework comprises two modules: offline preplanning and online reactive planning. Given LTL specifications, the preplanning phase generates a reference trajectory over the continuous workspace via sampling-based methods using prior environmental knowledge. The online reactive module dynamically adjusts the robot trajectory based on real-time visual perception to adapt to environmental changes. Extensive numerical simulations and real-world experiments using a quadruped robot demonstrate the effectiveness of the proposed vision-based reactive motion planning.
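The abstract's two-module structure (offline sampling-based preplanning over a known static map, followed by online reactive adjustment when perception detects changes) can be illustrated with a minimal sketch. This is not the authors' algorithm: the RRT-style planner, the circular-obstacle workspace, the `sense` callback standing in for LiDAR/visual perception, and the lateral-detour avoidance rule are all simplifying assumptions made for illustration, and the LTL layer is omitted entirely.

```python
import math
import random

def collides(p, obstacles):
    """True if point p lies inside any circular obstacle (center, radius)."""
    return any(math.dist(p, c) <= r for c, r in obstacles)

def offline_preplan(start, goal, static_obs, n_samples=3000, step=0.5, seed=0):
    """Toy sampling-based preplanner (RRT-style): grow a tree from `start`
    toward random samples in a 10x10 workspace, biased toward `goal`,
    rejecting nodes inside known static obstacles. Returns a waypoint list
    from start to goal, or None if no path was found."""
    rng = random.Random(seed)
    tree = {start: None}  # node -> parent
    for _ in range(n_samples):
        sample = goal if rng.random() < 0.3 else (rng.uniform(0, 10), rng.uniform(0, 10))
        nearest = min(tree, key=lambda q: math.dist(q, sample))
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        new = tuple(nearest[i] + step * (sample[i] - nearest[i]) / d for i in range(2))
        if collides(new, static_obs):
            continue
        tree[new] = nearest
        if math.dist(new, goal) < step:
            tree[goal] = new
            path, q = [], goal
            while q is not None:          # walk parents back to the root
                path.append(q)
                q = tree[q]
            return path[::-1]
    return None

def online_reactive(path, sense, detour=0.8):
    """Follow the reference trajectory; at each waypoint query perception
    (`sense`) for newly observed obstacles and, if the waypoint is now
    blocked, apply a simple lateral detour before committing to it."""
    executed = []
    for wp in path:
        dyn_obs = sense(wp)
        if collides(wp, dyn_obs):
            wp = (wp[0], wp[1] + detour)  # naive sidestep of the obstruction
        executed.append(wp)
    return executed
```

A usage sketch under the same assumptions: preplan around a known static obstacle, then let the reactive layer route around a dynamic obstacle that appears on the reference path at execution time.

```python
start, goal = (0.0, 0.0), (9.0, 9.0)
static_obs = [((2.0, 7.0), 1.0)]          # known a priori
ref_path = offline_preplan(start, goal, static_obs)

dyn_obs = [((ref_path[len(ref_path) // 2]), 0.3)]  # appears mid-path online
executed = online_reactive(ref_path, sense=lambda p: dyn_obs)
```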