Keywords
Artificial intelligence; Odometry; Computer vision; Computer science; Robustness (evolution); Initialization; RGB color model; Inertial measurement unit; Visual odometry; Line (geometry); Mathematics; Robot; Biochemistry; Gene; Mobile robot; Geometry; Chemistry; Programming language
Authors
Pengfei Gu,Ziyang Meng
Source
Journal: IEEE Robotics and Automation Letters
Date: 2023-04-25
Volume/Issue: 8(6): 3542-3549
Citations: 4
Identifier
DOI:10.1109/lra.2023.3270033
Abstract
Vision-based localization is an essential problem for autonomous systems, yet the performance of visual odometry degrades in challenging scenarios. This letter presents S-VIO, an RGB-D visual-inertial odometry (VIO) system that fully exploits multi-sensor measurements (i.e., depth, RGB, and IMU), heterogeneous landmarks (i.e., points, lines, and planes), and the structural regularity of the environment to obtain robust and accurate localization results. To detect the underlying structural regularity of the environment, a two-step Atlanta world inference method is proposed. Leveraging the gravity direction estimated by the VIO system, the proposed algorithm first generates horizontal Atlanta axis hypotheses from a set of recently optimized plane landmarks. Subsequent plane landmarks and line clusters are then used to filter out occasionally observed axes based on the persistence of their observations; the remaining axes survive and are saved in the Atlanta map for future re-observation. In particular, an efficient mine-and-stab (MnS) method is applied to classify structural lines and extract the vanishing point from each line cluster. In addition, a closed-form initialization method for structural line features is proposed, which leverages the known direction to obtain a better initial estimate. Integrating the above contributions, S-VIO is evaluated on two public real-world RGB-D inertial datasets. Experiments demonstrate that S-VIO achieves better accuracy and robustness than state-of-the-art VIO and RGB-D VIO algorithms.
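To make the abstract's two-step Atlanta world inference more concrete, the Python sketch below shows one plausible way to generate horizontal Atlanta-axis hypotheses from recently optimized plane normals once the gravity direction is known. The function name make_axis_hypotheses, the thresholds, and the greedy 1-D clustering are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Hedged sketch: horizontal Atlanta-axis hypothesis generation from plane
# normals, assuming gravity is available from the VIO state. Names and
# thresholds are illustrative, not taken from the paper.
import numpy as np

def make_axis_hypotheses(plane_normals, gravity, angle_tol_deg=5.0, min_support=2):
    """Cluster the horizontal components of plane normals into candidate
    Atlanta axes. Azimuths are folded modulo 90 deg, since each horizontal
    frame contributes two orthogonal horizontal axes."""
    g = np.asarray(gravity, dtype=float)
    g /= np.linalg.norm(g)
    # Build an arbitrary horizontal basis (e1, e2) orthogonal to gravity.
    e1 = np.cross(g, [1.0, 0.0, 0.0])
    if np.linalg.norm(e1) < 1e-6:          # gravity parallel to the x-axis
        e1 = np.cross(g, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(g, e1)

    azimuths = []
    for n in plane_normals:
        n = np.asarray(n, dtype=float)
        h = n - np.dot(n, g) * g           # project normal onto horizontal plane
        if np.linalg.norm(h) < 0.3:        # skip near-horizontal planes (floor/ceiling)
            continue
        h /= np.linalg.norm(h)
        az = np.degrees(np.arctan2(np.dot(h, e2), np.dot(h, e1)))
        azimuths.append(az % 90.0)         # fold orthogonal axis pairs together

    # Greedy 1-D clustering of azimuths; each sufficiently supported cluster
    # becomes one horizontal Atlanta-axis hypothesis.
    hypotheses = []
    used = np.zeros(len(azimuths), dtype=bool)
    for i, a in enumerate(azimuths):
        if used[i]:
            continue
        diffs = np.abs(np.array(azimuths) - a)
        diffs = np.minimum(diffs, 90.0 - diffs)   # circular distance on a 90-deg period
        members = diffs < angle_tol_deg
        used |= members
        if members.sum() >= min_support:
            # Simple mean; wrap-around averaging omitted for brevity.
            hypotheses.append(float(np.mean(np.array(azimuths)[members])))
    return hypotheses
```

In the pipeline described by the abstract, such hypotheses would then be checked against subsequently observed plane landmarks and line clusters, and only persistently re-observed axes would be retained in the Atlanta map.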