LiDAR
Computer vision
Artificial intelligence
Computer science
Simultaneous localization and mapping
Point cloud
Extended Kalman filter
Kalman filter
Depth map
Remote sensing
Range (aeronautics)
Fusion
Sensor fusion
Ranging
Mobile robot
Image (mathematics)
Robot
Geography
Engineering
Telecommunications
Linguistics
Philosophy
Aerospace engineering
Authors
Dong Wang,Meng Zhang,Guoxi Li,Rihuang Du,Cun Zhao
Identifier
DOI:10.1109/wsai51899.2021.9486369
Abstract
Existing 2D lidar and depth camera fusion schemes sacrifice the detection range of the lidar, and the repeated pose estimation performed for the two sensors lowers map construction efficiency. To solve this problem, a Simultaneous Localization and Mapping (SLAM) method combining a lidar and a depth camera is proposed. The method uses an Extended Kalman Filter (EKF) to predict the initial pose of the unmanned platform. The Gmapping algorithm then produces the optimized pose of the platform and the lidar map, and the optimized pose is combined with the depth point cloud to construct the depth camera map. Finally, the lidar map and the depth camera map are fused to achieve high-precision localization and mapping. Experiments show that this method extends the detection range of the lidar, saves the computation otherwise spent on depth camera pose estimation, improves map construction efficiency, and at the same time improves map completeness and accuracy.
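The two key steps the abstract names can be sketched in code. The following is a minimal illustration, not the paper's implementation: an EKF prediction step for a planar pose [x, y, theta] under an assumed unicycle motion model, and a simple occupancy-grid overlay standing in for the lidar/depth map fusion. The grid convention (-1 unknown, 0 free, 100 occupied, as in ROS `OccupancyGrid`) and all function names are assumptions for illustration.

```python
import numpy as np

def ekf_predict(state, P, v, w, dt, Q):
    """EKF prediction: propagate pose [x, y, theta] and covariance P
    given linear velocity v, angular velocity w, time step dt,
    and process noise Q (unicycle model; an assumed motion model)."""
    x, y, th = state
    state_pred = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           th + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q
    return state_pred, P_pred

def fuse_maps(lidar_map, depth_map):
    """Toy map fusion: overlay depth-camera observations onto the lidar
    grid, preferring occupied evidence where both sensors saw a cell."""
    fused = lidar_map.copy()
    known = depth_map >= 0  # cells the depth camera actually observed
    fused[known] = np.maximum(fused[known], depth_map[known])
    return fused
```

In this sketch the EKF covariance grows during prediction (no correction step is shown; in the paper's pipeline Gmapping supplies the optimized pose), and the fusion rule lets depth-camera obstacles fill cells the 2D lidar marked free or unknown, which is one way a depth camera can add completeness to a lidar map.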