Artificial intelligence
Computer vision
Constraint (computer-aided design)
Object (grammar)
Segmentation
Computer science
Image (mathematics)
Simultaneous localization and mapping
Image segmentation
Pattern recognition (psychology)
Mathematics
Robot
Mobile robot
Geometry
Authors
Peng Liao, Liheng Chen, Jialiang Tang, Zhengyong Feng
Abstract
Most existing vision-based simultaneous localization and mapping (SLAM) systems and their variants still assume that the observed scene is strictly static, and therefore perform poorly in dynamic environments. In this paper, we propose a direct geometrically constrained SLAM method based on target detection and depth-image segmentation, named YGDD-SLAM. YGDD-SLAM works robustly, accurately, and continuously in highly dynamic environments. The method first acquires static and potentially dynamic feature points in the current frame through a target detection network. Dynamic targets are then identified by combining the geometric change relationship between static and potentially dynamic feature points across adjacent frames. To improve the accuracy of this dynamic judgment, the motion probability of each potentially dynamic target over the past few frames is also taken into account. Subsequently, dynamic object regions are segmented at the pixel level based on the double-peak (bimodal) feature of the gray-scale histogram of the dynamic target region in the depth image, which ultimately achieves accurate removal of all dynamic feature points. We validate YGDD-SLAM on the TUM and Bonn datasets and show that it significantly improves localization accuracy and system stability across different types of dynamic environments.
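The abstract's final step, splitting a detected target's depth patch at the valley between the two peaks of its depth histogram, can be illustrated with a bimodal-threshold sketch. The paper does not publish its exact procedure, so the snippet below uses Otsu's between-class-variance criterion as a stand-in for the double-peak split; the function name and the "nearer pixels are the dynamic object" assumption are illustrative, not taken from the paper.

```python
import numpy as np

def split_depth_region(depth_patch, bins=256):
    """Split a depth-image patch at the valley between the two histogram
    peaks (Otsu's criterion, as a stand-in for the paper's double-peak
    split). Returns the threshold and a boolean foreground mask."""
    flat = depth_patch.ravel().astype(np.float64)
    hist, edges = np.histogram(flat, bins=bins)
    p = hist / hist.sum()                    # per-bin probability
    centers = (edges[:-1] + edges[1:]) / 2   # bin-center depth values
    best_t, best_var = centers[0], -1.0
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()    # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:t] * centers[:t]).sum() / w0
        mu1 = (p[t:] * centers[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:           # maximize between-class variance
            best_var, best_t = var_between, centers[t]
    # Assumption: the dynamic object sits nearer the camera than the
    # background, so the near (smaller-depth) mode is the foreground.
    mask = depth_patch < best_t
    return best_t, mask

# Synthetic example: a near object (depth 1 m) on a far background (4 m).
patch = np.full((20, 20), 4.0)
patch[5:15, 5:15] = 1.0
threshold, mask = split_depth_region(patch)
```

On this synthetic patch the threshold lands between the two depth modes, so the mask isolates exactly the near square; in the full pipeline, feature points falling inside such a mask would be discarded before pose estimation.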