Visual odometry
Computer vision
Artificial intelligence
Pose
Computer science
Sensor fusion
Odometry
Fusion
Mobile robot
Robot
Linguistics
Philosophy
Authors
Jing Wang, Yibo Wang, Cheng Guo, Shujun Xing, Xing Ye
Identifier
DOI: 10.1109/crc60659.2023.10488546
Abstract
Camera pose estimation involves determining a camera's position coordinates and its angular rotations about three axes, describing its orientation and location relative to a given scene. While neural networks have made significant strides in camera pose estimation, they remain susceptible to issues such as motion blur in the scene. This paper addresses the challenge of motion blur caused by camera movement by leveraging information from visual odometry, which provides camera motion and pose details. A visual odometry network is designed, and trajectory drift in the network is mitigated by incorporating a Long Short-Term Memory (LSTM) module. The visual odometry information is then fused into the camera pose network to improve pose accuracy. The proposed method demonstrates substantial improvements compared to other approaches.
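To make the described pipeline concrete, below is a minimal PyTorch sketch of the general idea in the abstract: a visual-odometry branch whose per-frame features pass through an LSTM to retain temporal context (the abstract's drift-mitigation step), with those features then fused into an absolute camera-pose regression branch. This is not the authors' implementation; the layer sizes, the concatenation-based fusion, and all class and parameter names are illustrative assumptions.

```python
# Hypothetical sketch of a VO branch + LSTM + pose-network fusion.
# Layer sizes and the concat fusion are assumptions, not the paper's design.
import torch
import torch.nn as nn


class VOBranch(nn.Module):
    """Relative-motion (visual odometry) branch: CNN features per frame pair,
    then an LSTM over the sequence to keep temporal context and limit drift."""

    def __init__(self, feat_dim=256, hidden_dim=256):
        super().__init__()
        # Lightweight CNN encoder over stacked consecutive frames (6 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # 6-DoF relative pose per step: 3 translation + 3 rotation parameters.
        self.rel_pose = nn.Linear(hidden_dim, 6)

    def forward(self, pair_seq):
        # pair_seq: (B, T, 6, H, W) stacked consecutive frame pairs.
        b, t = pair_seq.shape[:2]
        x = pair_seq.flatten(0, 1)                      # (B*T, 6, H, W)
        f = self.encoder(x).flatten(1)                  # (B*T, 128)
        f = self.proj(f).view(b, t, -1)                 # (B, T, feat_dim)
        h, _ = self.lstm(f)                             # temporal smoothing
        return self.rel_pose(h), h                      # relative poses, features


class FusedPoseNet(nn.Module):
    """Absolute-pose branch: fuses VO features with per-frame appearance
    features before regressing position (xyz) and orientation (quaternion)."""

    def __init__(self, feat_dim=256, vo_hidden=256):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, feat_dim)
        self.head = nn.Sequential(
            nn.Linear(feat_dim + vo_hidden, 256), nn.ReLU(),
            nn.Linear(256, 7),                          # xyz + quaternion
        )

    def forward(self, frame_seq, vo_features):
        # frame_seq: (B, T, 3, H, W); vo_features: (B, T, vo_hidden)
        b, t = frame_seq.shape[:2]
        x = frame_seq.flatten(0, 1)
        f = self.proj(self.frame_encoder(x).flatten(1)).view(b, t, -1)
        fused = torch.cat([f, vo_features], dim=-1)     # simple concat fusion
        return self.head(fused)                         # (B, T, 7) absolute poses


if __name__ == "__main__":
    frames = torch.randn(2, 5, 3, 64, 64)                       # toy sequence
    pairs = torch.cat([frames[:, :-1], frames[:, 1:]], dim=2)   # (2, 4, 6, 64, 64)
    vo = VOBranch()
    posenet = FusedPoseNet()
    rel, vo_feat = vo(pairs)
    absolute = posenet(frames[:, 1:], vo_feat)
    print(rel.shape, absolute.shape)                    # (2, 4, 6) and (2, 4, 7)
```

In this sketch the LSTM carries motion context across the sequence, which is one plausible way to realize the drift mitigation the abstract attributes to the LSTM; the actual fusion mechanism and loss functions used in the paper are not specified here.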