LiDAR
Artificial intelligence
Simultaneous localization and mapping
Computer science
Computer vision
Ranging
Point cloud
Nonparametric statistics
Structure from motion
Motion (physics)
Remote sensing
Geography
Mathematics
Mobile robot
Robotics
Statistics
Telecommunications
Authors
Joohyun Park, Younggun Cho, Young-Sik Shin
Identifier
DOI:10.1109/tits.2022.3204917
Abstract
In urban environments, simultaneous localization and mapping (SLAM) is essential for autonomous driving. Most light detection and ranging (LiDAR) SLAM methodologies have been developed for relatively static environments, even though real-world environments contain many dynamic objects such as vehicles, bicycles, and pedestrians. This paper proposes an efficient and robust LiDAR SLAM framework that leverages an estimated background model to achieve robust motion estimation in dynamic urban environments. Based on probabilistic object estimation, the dynamic removal module estimates a nonparametric background model to recognize dynamic objects. This module estimates the probability of the difference between range values across accumulated LiDAR frames. Dynamic objects are then removed by compensating for the sensor velocity obtained from the estimated ego-motion. In the local mapping module, our method optimizes the LiDAR motion while considering the dynamic characteristics of LiDAR point clouds. Finally, the proposed method produces a global map of static point clouds and accurate LiDAR motion through global pose optimization. We tested the proposed method on the well-known public KITTI dataset and on a custom dataset with complex environments containing various moving objects. Comparisons with state-of-the-art (SOTA) methods demonstrate that our approach is more robust and efficient. For example, the proposed method achieved average errors of 0.63% and $0.18^{\circ}/100\,\text{m}$ on the KITTI dataset with a processing time of $0.96\,\text{ms}$, confirming real-time capability.
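The core idea the abstract describes — flagging points as dynamic when their range differs from a nonparametric background model accumulated over LiDAR frames — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the occlusion margin, and the vote ratio are assumptions; real pipelines would also perform the ego-motion compensation the paper mentions before comparing frames.

```python
import numpy as np

def background_model(range_frames):
    """Nonparametric per-pixel background: simply retain the raw range
    samples from the accumulated (motion-compensated) range images.
    `range_frames` is a list of (H, W) arrays; returns a (T, H, W) stack."""
    return np.stack(range_frames, axis=0)

def dynamic_mask(bg, current, margin=0.5, ratio=0.8):
    """Flag a pixel as dynamic when its current range is shorter than most
    accumulated background samples by more than `margin` metres, i.e. an
    object now occludes space that was previously observed as free.
    `margin` and `ratio` are illustrative thresholds (assumptions)."""
    farther = bg > (current[None, :, :] + margin)  # (T, H, W) votes
    vote_fraction = farther.mean(axis=0)           # fraction of frames voting "occluded"
    return vote_fraction > ratio                   # (H, W) boolean mask

# Toy usage: a static 10 m scene, then one pixel jumps to 5 m (a moving object).
frames = [np.full((2, 2), 10.0) for _ in range(5)]
bg = background_model(frames)
current = np.full((2, 2), 10.0)
current[0, 0] = 5.0
mask = dynamic_mask(bg, current)  # True only at the occluding pixel (0, 0)
```

Treating the accumulated samples themselves as the model (rather than fitting, say, a Gaussian per pixel) is what makes the background estimate nonparametric: no fixed distributional form is assumed for the range differences.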