Keywords
LiDAR, Computer Science, Remote Sensing, Radar, Bandwidth (computing), Fusion, Artificial Intelligence, Computer Vision, Telecommunications
Authors
Tiezhen Jiang, Rebecca Kang, Qingzhu Li
Identifier
DOI:10.1088/1361-6501/adafcb
Abstract
In recent years, the rapid advancement of autonomous driving technology has made environmental perception tasks increasingly important. 4D millimeter-wave radar, an economical and reliable sensing technology, has begun to attract attention, while LiDAR remains popular for its accurate measurements and strong resistance to interference. To overcome the limitations of single-sensor perception, this paper proposes BSM-NET, a multi-bandwidth, multi-scale, multi-modal fusion method for 4D radar and LiDAR. Image-processing techniques are used to clean the point cloud data, reducing errors and noise. BSM-NET consists of two key modules: Multi-Bandwidth Fusion (MBF) and Multi-Scale Fusion (MSF). MBF enhances data quality by capturing point cloud density and addressing issues such as gap filling and noise reduction, while MSF improves accuracy and robustness through high-precision computation. For better integration, the RLMF framework enables the two modalities to learn from each other, allowing effective fusion of 80-line LiDAR and 4D radar data. Experimental results demonstrate that BSM-NET significantly outperforms current state-of-the-art algorithms on the Dual-Radar dataset. Compared to M2-Fusion, it achieves improvements of 2.04% and 2.21% in medium-difficulty 3D object detection and Bird's Eye View (BEV) detection for the vehicle category, respectively.
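The abstract does not describe the implementation of the fusion modules. As a rough, illustrative sketch of the multi-scale fusion idea (blending sensor data at several resolutions and combining the results), the NumPy snippet below fuses two bird's-eye-view occupancy grids, one standing in for LiDAR and one for 4D radar. All function names, scales, and weights here are illustrative assumptions, not the actual BSM-NET design.

```python
import numpy as np

def avg_pool2d(grid, k):
    """Average-pool a 2D grid by factor k (assumes sizes divisible by k)."""
    h, w = grid.shape
    return grid.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def upsample2d(grid, k):
    """Nearest-neighbour upsample of a 2D grid by factor k."""
    return np.repeat(np.repeat(grid, k, axis=0), k, axis=1)

def multi_scale_fuse(lidar_bev, radar_bev, scales=(1, 2, 4),
                     weights=(0.5, 0.3, 0.2)):
    """Toy multi-scale fusion: average the two BEV maps at several
    resolutions, upsample each back, and take a weighted sum."""
    fused = np.zeros_like(lidar_bev)
    for k, w in zip(scales, weights):
        coarse_l = avg_pool2d(lidar_bev, k)
        coarse_r = avg_pool2d(radar_bev, k)
        fused += w * upsample2d(0.5 * (coarse_l + coarse_r), k)
    return fused
```

In an actual detector the per-scale averaging would be replaced by learned feature extractors and attention-style weighting; the sketch only shows how coarser scales let radar fill gaps that the finer LiDAR grid leaves empty.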