Topics: LiDAR, computer vision, artificial intelligence, fusion, computer science, object detection, remote sensing, object (grammar), sensor fusion, geography, segmentation, linguistics, philosophy
Authors
B. Zhang, Yixin Wang, Chengbiao Zhang, Junzhao Jiang, Xiang Luo, Xinyu Wang, Yangyang Zhang, Zhongzheng Liu, Gan Shen, Yunsheng Ye, Ping Jiang
Identifier
DOI: 10.1177/09544070251327229
Abstract
Foggy environments present significant challenges to autonomous driving owing to the effects of attenuation and backscattering, which often compromise the performance of LiDAR-camera fusion-based perception systems. In this study, we introduce FogFusion, a novel 3D object detection network specifically designed to operate effectively under foggy conditions by leveraging a synergistic camera-LiDAR fusion approach. Our approach integrates a Depth Completion network with Fog Convolution (DCFC) to generate virtual point clouds that enhance the original sparse LiDAR data. These enhanced point clouds are then processed using a Flexible Cylindrical Voxel (FCV) encoding method. To ensure robust multi-modal feature integration, we employ a Cylindrical Fusion Module (CFM) during the fusion process. Experimental evaluations on the KITTI and KITTI-C datasets reveal that FogFusion improves detection performance in foggy conditions by at least 3.32% compared to the baseline model and surpasses the performance of advanced 3D object detection models. These results highlight FogFusion’s potential to significantly enhance the environmental perception capabilities of autonomous vehicles operating in foggy weather conditions.
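The abstract describes a staged architecture: image-guided depth completion produces virtual points that densify the sparse LiDAR sweep, the augmented cloud is encoded on a cylindrical voxel grid, and a fusion module combines voxel and image features before 3D box prediction. The Python/PyTorch sketch below illustrates only that control flow; the class name FogFusionSketch, the injected submodules, the pinhole back-projection, and all tensor shapes are assumptions made for illustration, not the authors' implementation of DCFC, FCV, or CFM.

# Minimal structural sketch of the pipeline described in the abstract.
# Every name and interface here is an assumption for clarity; real
# DCFC / FCV / CFM modules would replace the injected placeholders.
from typing import Tuple

import torch
import torch.nn as nn


class FogFusionSketch(nn.Module):
    def __init__(self, depth_net: nn.Module, voxel_encoder: nn.Module,
                 fusion_module: nn.Module, det_head: nn.Module,
                 intrinsics: Tuple[float, float, float, float]):
        super().__init__()
        self.depth_net = depth_net          # plays the role of DCFC (assumed interface)
        self.voxel_encoder = voxel_encoder  # plays the role of FCV encoding (assumed)
        self.fusion_module = fusion_module  # plays the role of CFM (assumed)
        self.det_head = det_head
        self.intrinsics = intrinsics        # (fx, fy, cx, cy) pinhole parameters

    def lift_to_points(self, dense_depth: torch.Tensor) -> torch.Tensor:
        # Back-project each pixel (u, v, depth) into camera space to form
        # "virtual" 3D points; assumes a single (H, W) depth map.
        h, w = dense_depth.shape[-2:]
        v, u = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                              torch.arange(w, dtype=torch.float32),
                              indexing="ij")
        d = dense_depth.reshape(-1)
        fx, fy, cx, cy = self.intrinsics
        x = (u.reshape(-1) - cx) * d / fx
        y = (v.reshape(-1) - cy) * d / fy
        return torch.stack([x, y, d], dim=-1)  # (H*W, 3) virtual points

    def forward(self, image: torch.Tensor, lidar_points: torch.Tensor):
        # 1) Complete the sparse depth with image guidance (DCFC's role).
        dense_depth = self.depth_net(image, lidar_points)
        # 2) Merge the raw (N, 3) LiDAR points with the virtual points.
        points = torch.cat([lidar_points, self.lift_to_points(dense_depth)], dim=0)
        # 3) Encode the augmented cloud on a cylindrical voxel grid (FCV's role).
        voxel_feats = self.voxel_encoder(points)
        # 4) Fuse voxel and image features (CFM's role), then predict 3D boxes.
        return self.det_head(self.fusion_module(voxel_feats, image))

A concrete system would plug trained networks into these slots; the abstract itself reports only the end-to-end outcome, a gain of at least 3.32% over the baseline under fog on KITTI-C.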