Keywords (auto-generated tags): LiDAR, inertial measurement unit (IMU), computer science, robustness (evolution), artificial intelligence, initialization, computer vision, odometry, remote sensing, robotics, geology, mobile robot, biochemistry, gene, chemistry, programming language
Authors
Tianci Wen, Yongchun Fang, Biao Lu, Xuebo Zhang, Chaoquan Tang
Source
Journal: IEEE Robotics and Automation Letters
Date: 2024-01-18
Volume/issue: 9 (3): 2399-2406
Citations: 9
Identifiers
DOI: 10.1109/lra.2024.3355778
Abstract
In this letter, we propose a tightly coupled LiDAR-inertial-visual (LIV) state estimator termed LIVER, which achieves robust and accurate localization and mapping in underground environments. LIVER starts with an effective strategy for LIV synchronization. A robust initialization process that integrates LiDAR, vision, and IMU is realized. A tightly coupled, nonlinear optimization-based method achieves highly accurate LiDAR-inertial-visual odometry (LIVO) by fusing LiDAR, visual, and IMU information. We consider scenarios in underground environments that are unfriendly to LiDAR and cameras. A visual-IMU-assisted method enables the evaluation and handling of LiDAR degeneracy. A deep neural network is introduced to eliminate the impact of poor lighting conditions on images. We verify the performance of the proposed method by comparing it with state-of-the-art methods on public datasets and in real-world experiments, including underground mines. In the underground mine tests, tightly coupled methods without degeneracy handling fail due to self-similar areas (affecting LiDAR) and poor lighting conditions (affecting vision). Under these conditions, our degeneracy handling approach successfully eliminates the impact of such disturbances on the system.
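The abstract does not detail how LiDAR degeneracy is evaluated. A common approach in LiDAR odometry, sketched below under that assumption, inspects the spectrum of the Gauss-Newton Hessian (J^T J) of the scan-registration problem: a near-zero eigenvalue means the point constraints leave some pose direction unobservable, e.g. along a featureless, self-similar tunnel. The function names and threshold here are illustrative, not from the letter.

```python
import numpy as np

def degeneracy_factor(jacobian: np.ndarray) -> float:
    """Smallest eigenvalue of the approximate Hessian J^T J.

    `jacobian` stacks one row per point-to-plane residual; a small
    minimum eigenvalue indicates a poorly constrained (degenerate)
    direction in the state space.
    """
    hessian = jacobian.T @ jacobian
    # eigvalsh returns eigenvalues of a symmetric matrix in ascending order
    return float(np.linalg.eigvalsh(hessian)[0])

def is_degenerate(jacobian: np.ndarray, threshold: float) -> bool:
    """Flag the registration as degenerate when the weakest direction
    falls below a hand-tuned threshold (illustrative value)."""
    return degeneracy_factor(jacobian) < threshold

# Well-constrained case: residuals constrain all three axes.
J_good = 10.0 * np.eye(3)
# Degenerate case: all residual gradients lie in the x-y plane,
# so the z direction is unobservable (smallest eigenvalue is 0).
J_bad = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.0]])

print(is_degenerate(J_good, threshold=1.0))  # False
print(is_degenerate(J_bad, threshold=1.0))   # True
```

When a direction is flagged, systems in this family typically fall back on complementary sensors (here, the visual-IMU estimate) to constrain the degenerate component rather than trusting the LiDAR update.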