Artificial intelligence
LiDAR
Computer vision
Visual odometry
Computer science
Odometry
Feature extraction
Feature
Robustness
Simultaneous localization and mapping (SLAM)
Monocular
Benchmark
Fusion
Multi-modal
Pattern recognition
Remote sensing
Geography
Robotics
Mobile robot
Geodesy
DOI:10.1109/tiv.2022.3215141
Abstract
In this paper, we present a novel multi-sensor fusion framework for tightly coupled monocular visual-LiDAR odometry and mapping. Compared with previous visual-LiDAR fusion frameworks, the proposed framework exploits more constraints among LiDAR and visual features and integrates them in a tightly coupled manner. Specifically, the framework starts with a preprocessing module comprising LiDAR feature extraction, visual feature extraction and tracking, and visual feature depth recovery. A frame-to-frame odometry module then fuses visual feature tracking with inter-frame LiDAR feature matching to provide a coarse pose estimate for the next module. Finally, to refine the pose and build a multi-modal map, we introduce a multi-modal mapping module that tightly couples multi-modal feature constraints by matching or registering multi-modal features to the multi-modal map. In addition, the proposed framework remains functional in sensor-degraded (texture-less or structure-less) environments, which increases its robustness. The effectiveness and performance of the proposed framework are demonstrated and evaluated on the public KITTI odometry benchmark, and the results show that it achieves performance comparable to state-of-the-art visual-LiDAR odometry frameworks.
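The abstract outlines a three-stage pipeline: preprocessing (LiDAR feature extraction, visual feature tracking, depth recovery), frame-to-frame odometry producing a coarse pose, and multi-modal mapping that refines the pose against an accumulated map. The sketch below is a minimal, hypothetical outline of that data flow only; all class and method names (Preprocessor, FrameToFrameOdometry, MultiModalMapper, and so on) are illustrative assumptions, not the authors' actual interfaces or implementation.

```python
# Hypothetical sketch of the three-stage pipeline described in the abstract.
# Every name here is an illustrative assumption, not the authors' API.

from dataclasses import dataclass, field

@dataclass
class Features:
    """Multi-modal features produced by the preprocessing stage."""
    lidar_edges: list = field(default_factory=list)    # LiDAR edge features
    lidar_planes: list = field(default_factory=list)   # LiDAR planar features
    visual_tracks: list = field(default_factory=list)  # tracked image features
    visual_depths: dict = field(default_factory=dict)  # recovered feature depths

class Preprocessor:
    """Stage 1: LiDAR feature extraction, visual feature extraction and
    tracking, and visual feature depth recovery (e.g. from LiDAR points)."""
    def run(self, image, point_cloud, prev_feats):
        feats = Features()
        # ... extract edge/planar features from point_cloud, track visual
        # features from prev_feats into image, and recover their depths ...
        return feats

class FrameToFrameOdometry:
    """Stage 2: coarse pose from fused visual feature tracking and
    frame-to-frame LiDAR feature matching."""
    def estimate(self, feats, prev_feats, prior_pose):
        # Jointly minimize visual reprojection residuals and LiDAR
        # point-to-edge / point-to-plane residuals between the two frames.
        coarse_pose = prior_pose  # placeholder for the optimized pose
        return coarse_pose

class MultiModalMapper:
    """Stage 3: refine the pose by matching or registering multi-modal
    features against the accumulated multi-modal map."""
    def __init__(self):
        self.map_features = Features()
    def refine(self, feats, coarse_pose):
        # Tightly couple constraints from both modalities against the map,
        # then insert the registered features into the map.
        refined_pose = coarse_pose  # placeholder for the optimized pose
        return refined_pose

def process_frame(image, point_cloud, pre, f2f, mapper, prev_feats, pose):
    """One iteration of the odometry-and-mapping loop."""
    feats = pre.run(image, point_cloud, prev_feats)
    coarse_pose = f2f.estimate(feats, prev_feats, pose)
    refined_pose = mapper.refine(feats, coarse_pose)
    return feats, refined_pose
```

The split between FrameToFrameOdometry and MultiModalMapper mirrors the abstract's design choice: a cheap inter-frame step supplies the initial guess, and the map-based step carries the tightly coupled refinement.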