Simultaneous localization and mapping
Core (optical fiber)
Computer science
Orientation (vector space)
Artificial intelligence
Key (lock)
Position (finance)
Pose
Computer vision
Robot
Mobile robot
Mathematics
Computer security
Telecommunications
Geometry
Finance
Economics
Authors
Dupeng Cai,Ruoqing Li,Zhuhua Hu,Junlin Lu,Shijiang Li,Yaochi Zhao
Source
Journal: Neurocomputing
[Elsevier BV]
Date: 2024-04-25
Volume/Issue: 590: 127760-127760
Cited by: 15
Identifiers
DOI:10.1016/j.neucom.2024.127760
Abstract
Visual Simultaneous Localization and Mapping (VSLAM) has become a key technology in autonomous driving and robot navigation. Relying on camera sensors, VSLAM provides richer and more precise environmental perception, and its advancement has accelerated in recent years. However, current review studies are often limited to in-depth analysis of a specific module and lack a comprehensive review of the entire VSLAM framework. A VSLAM system consists of five core components: (1) the camera sensor module captures visual information about the surrounding environment; (2) the front-end module uses image data to roughly estimate the camera's position and orientation; (3) the back-end module optimizes the pose estimates produced by the front-end; (4) the loop detection module corrects accumulated errors in the system; and (5) the mapping module generates maps of the environment. Taking these core components as its entry point, this review provides a systematic and comprehensive analysis of the VSLAM framework. Deep learning brings new development opportunities for VSLAM, but problems of data dependence, cost, and real-time performance must still be solved in practical applications. We explore in depth the challenges of combining VSLAM with deep learning, together with feasible solutions. This review provides a valuable reference for the development of VSLAM and will help push the technology toward greater intelligence and efficiency, so that it can better meet the needs of future intelligent autonomous systems across multiple fields.
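To make the five-module structure described in the abstract concrete, here is a minimal, illustrative Python sketch of such a pipeline. All class and method names (CameraSensor, FrontEnd, BackEnd, LoopDetector, Mapper, run_pipeline) are hypothetical placeholders assumed for this sketch; they are not taken from the paper or from any specific SLAM library, and each stage is stubbed rather than implemented.

```python
# A minimal, illustrative skeleton of the five-module VSLAM pipeline described
# in the abstract above. All class and method names are hypothetical; they are
# not taken from the paper or from any specific SLAM library.

from dataclasses import dataclass


@dataclass
class Pose:
    """Camera pose as 2D position and heading (a real system would use SE(3))."""
    x: float = 0.0
    y: float = 0.0
    theta: float = 0.0


class CameraSensor:
    """(1) Captures visual information about the surrounding environment."""
    def grab_frame(self, t):
        # Placeholder: a real sensor returns an image; here we fake feature data.
        return {"timestamp": t, "features": (t * 0.1, t * 0.2)}


class FrontEnd:
    """(2) Roughly estimates the camera's position and orientation from image data."""
    def track(self, frame, last_pose):
        # Placeholder visual odometry: nudge the pose using the fake features.
        dx, dy = frame["features"]
        return Pose(last_pose.x + dx, last_pose.y + dy, last_pose.theta)


class BackEnd:
    """(3) Optimizes the pose estimates produced by the front-end."""
    def optimize(self, trajectory):
        # Placeholder: a real back-end runs bundle adjustment or pose-graph
        # optimization; here the trajectory is returned unchanged.
        return trajectory


class LoopDetector:
    """(4) Detects revisited places so accumulated drift can be corrected."""
    def detect(self, frame, keyframes):
        # Placeholder: a real detector compares appearance descriptors.
        return False


class Mapper:
    """(5) Generates the environment map from the estimated poses."""
    def __init__(self):
        self.landmarks = []

    def update(self, pose, frame):
        self.landmarks.append((pose.x, pose.y))


def run_pipeline(num_frames=5):
    sensor, front, back = CameraSensor(), FrontEnd(), BackEnd()
    loop, mapper = LoopDetector(), Mapper()
    trajectory, keyframes = [Pose()], []

    for t in range(num_frames):
        frame = sensor.grab_frame(t)                # (1) sense
        pose = front.track(frame, trajectory[-1])   # (2) front-end tracking
        trajectory.append(pose)
        if loop.detect(frame, keyframes):           # (4) loop closure check
            trajectory = back.optimize(trajectory)  # (3) global correction
        mapper.update(pose, frame)                  # (5) mapping
        keyframes.append(frame)

    return back.optimize(trajectory), mapper.landmarks


if __name__ == "__main__":
    traj, landmarks = run_pipeline()
    print(f"Estimated {len(traj)} poses; mapped {len(landmarks)} landmarks.")
```

In a real system, each placeholder would be replaced by the corresponding algorithmic component (feature tracking or direct alignment in the front-end, bundle adjustment or pose-graph optimization in the back-end, appearance-based place recognition for loop detection); the sketch only shows how data flows between the five modules.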