Authors
Amanda Adkins,Taijing Chen,Joydeep Biswas
Source
Journal: IEEE Robotics and Automation Letters
Date: 2024-02-07
Volume/Issue: 9 (3): 2909-2916
Citations: 11
Identifier
DOI: 10.1109/LRA.2024.3363534
Abstract
Robots responsible for tasks over long time scales must be able to localize consistently and scalably amid geometric, viewpoint, and appearance changes. Existing visual SLAM approaches rely on low-level feature descriptors that are not robust to such environmental changes and result in large map sizes that scale poorly over long-term deployments. In contrast, object detections are robust to environmental variations and lead to more compact representations, but most object-based SLAM systems target short-term indoor deployments with close objects. In this letter, we introduce ObVi-SLAM to overcome these challenges by leveraging the best of both approaches. ObVi-SLAM uses low-level visual features for high-quality short-term visual odometry; and to ensure global, long-term consistency, ObVi-SLAM builds an uncertainty-aware long-term map of persistent objects and updates it after every deployment. By evaluating ObVi-SLAM on data from 16 deployment sessions spanning different weather and lighting conditions, we empirically show that ObVi-SLAM generates accurate localization estimates consistent over long time scales in spite of varying appearance conditions.