LiDAR
Radar
Sensor fusion
Computer science
Scale (ratio)
Artificial intelligence
Point cloud
Object detection
Algorithm
Computer vision
Pattern recognition (psychology)
Remote sensing
Physics
Geography
Quantum mechanics
Telecommunications
Authors
Li Wang,Xinyu Zhang,Jun Li,Baowei Xv,Rong Fu,Haifeng Chen,Lei Yang,Dafeng Jin,Lijun Zhao
Source
Journal: IEEE Transactions on Vehicular Technology
[Institute of Electrical and Electronics Engineers]
Date: 2022-12-19
Volume/Issue: 72 (5): 5628-5641
Citations: 21
Identifier
DOI:10.1109/tvt.2022.3230265
Abstract
Multi-modal fusion overcomes the inherent limitations of single-sensor perception in 3D object detection for autonomous driving. Fusing 4D Radar and LiDAR can extend the detection range and improve robustness. Nevertheless, the differing data characteristics and noise distributions of the two sensors hinder performance improvement when they are integrated directly. Therefore, we are the first to propose a novel fusion method for 4D Radar and LiDAR, termed $M^{2}$-Fusion, based on Multi-modal and Multi-scale fusion. To better integrate the two sensors, we propose an Interaction-based Multi-Modal Fusion (IMMF) method that utilizes a self-attention mechanism to learn features from each modality and exchange intermediate-layer information. To address the precision-efficiency trade-off of current single-resolution voxel division, we also put forward a Center-based Multi-Scale Fusion (CMSF) method that first regresses the center points of objects and then extracts features at multiple resolutions. Furthermore, we present a data preprocessing method based on a Gaussian distribution that effectively suppresses data noise, reducing errors caused by the point cloud divergence of 4D Radar data in the $x$-$z$ plane. To evaluate the proposed fusion method, a series of experiments were conducted on the Astyx HiRes 2019 dataset, which includes calibrated 4D Radar and 16-line LiDAR data. The results demonstrate that our fusion method compares favorably with state-of-the-art algorithms. Compared to PointPillars, our method achieves mAP (mean average precision) increases of 5.64% and 13.57% for 3D and BEV (bird's eye view) detection of the car class at the moderate difficulty level, respectively.
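The Gaussian-based preprocessing described in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's actual implementation: assuming the divergence noise of 4D Radar points in the $x$-$z$ plane is roughly Gaussian, it drops points whose $x$ or $z$ coordinate deviates from the per-axis mean by more than `k` standard deviations. The function name, point layout `(x, y, z)`, and threshold `k` are assumptions for illustration.

```python
import statistics

def filter_radar_points(points, k=2.0):
    """Hypothetical sketch of Gaussian-based 4D Radar denoising.

    points: list of (x, y, z) tuples; k: per-axis sigma threshold.
    Keeps only points within k standard deviations of the mean on
    both the x and z axes (the plane where divergence noise appears).
    """
    xs = [p[0] for p in points]
    zs = [p[2] for p in points]
    mx, sx = statistics.mean(xs), statistics.stdev(xs)
    mz, sz = statistics.mean(zs), statistics.stdev(zs)
    return [
        p for p in points
        if abs(p[0] - mx) <= k * sx and abs(p[2] - mz) <= k * sz
    ]
```

In a real pipeline this filtering would run on each Radar frame before voxelization, so that divergent outliers do not distort the fused features.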