Artificial intelligence
Computer vision
Robustness (evolution)
Computer science
Lidar
Point cloud
Pixel
Feature (linguistics)
Calibration
Residual
Bundle adjustment
Feature extraction
Camera resectioning
Remote sensing
Algorithm
Mathematics
Photogrammetry
Geography
Biochemistry
Chemistry
Linguistics
Philosophy
Statistics
Gene
Authors
Shengjun Tang,Yuqi Feng,Junjie Huang,Xiaoming Li,Zhihan Lv,Yuhong Feng,Weixi Wang
Identifier
DOI: 10.1109/TITS.2023.3328062
Abstract
With the rapid development of autonomous driving and SLAM technology, the perception system of a vehicle relies heavily on laser and image sensors to capture the real-world scene and avoid obstacles autonomously. To achieve accurate and robust multi-sensor fusion, high-precision extrinsic calibration of the camera and laser scanner is a necessary requirement. Traditional multi-sensor calibration methods based on manual features depend on specific scenarios and may not provide feature information over long distances. In this paper, we present a novel approach for robustly calibrating the extrinsic parameters of a solid-state (SS) lidar-camera system in a natural environment. Our proposed method begins by obtaining robust line feature information: we first employ a super-voxel clustering method to extract global 3D line features from the complete point cloud and then back-project these 3D line features into 2D space. Afterward, a transformer-based edge detection network, EDTER, is used to detect edge features and estimate an edge probability pixel by pixel. To account for the uncertainty of two-dimensional line features and the inconsistency of residuals at different distances, we construct a line feature weight model for line feature residual calculation. Finally, we minimize the residual errors using least-squares optimization to recover the relative pose of the camera and the lidar sensor. We conducted a performance study comparing our proposed method against existing targetless calibration methods in various natural scenes. The experimental results demonstrate that our proposed method achieves higher robustness, accuracy, and consistency, making it suitable for real-world applications.
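The core numerical step the abstract describes can be illustrated with a minimal numpy sketch: project lidar points lying on a 3D line into the image, measure their perpendicular distance to the corresponding detected 2D line, and down-weight distant (noisier) points before least-squares minimization. The function names, the pinhole projection, and the simple 1/depth weighting below are illustrative assumptions, not the paper's exact weight model.

```python
import numpy as np

def project(points_lidar, R, t, K):
    """Transform lidar points into the camera frame and project to pixels."""
    p_cam = points_lidar @ R.T + t               # (N, 3) in camera frame
    uv = p_cam @ K.T                             # homogeneous pixel coords
    return uv[:, :2] / uv[:, 2:3], p_cam[:, 2]   # pixels, depths

def line_residuals(points_lidar, line_2d, R, t, K):
    """Weighted point-to-line residuals for one 3D-2D line correspondence.

    line_2d = (a, b, c) with a*u + b*v + c = 0 and a^2 + b^2 = 1.
    Weights fall off with range so distant features count less
    (an assumed 1/depth weighting standing in for the paper's model).
    """
    px, depth = project(points_lidar, R, t, K)
    a, b, c = line_2d
    dist = a * px[:, 0] + b * px[:, 1] + c       # signed perpendicular distance
    w = 1.0 / np.maximum(depth, 1e-6)
    return w * dist

# Synthetic check: points on a 3D line, true extrinsics = identity.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.stack([np.linspace(-1, 1, 20),
                np.zeros(20),
                np.full(20, 5.0)], axis=1)       # a horizontal 3D line at z = 5
line = np.array([0.0, 1.0, -240.0])              # its image: v = 240 (normalized)
res = line_residuals(pts, line, R, t, K)
print(np.abs(res).max())                         # ~0 at the true extrinsics
```

In the full method, residuals like these would be stacked over all line correspondences and fed to a nonlinear least-squares solver (e.g. Gauss-Newton over the 6-DoF pose) to recover the camera-lidar extrinsics.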