Pillar
Dual (grammatical number)
Computer science
Radar
Object detection
Object (grammar)
Radar imaging
Artificial intelligence
Computer vision
Telecommunications
Pattern recognition (psychology)
Engineering
Art
Literature
Structural engineering
Authors
Jingzhong Li, Lin Yang, Yuxuan Chen, Yixin Yang, Yue Jin, Kuanta Akiyama
Identifier
DOI: 10.1109/itsc57777.2023.10422406
Abstract
3D object detection plays an indispensable role in autonomous driving. Most existing methods for 3D object detection from point clouds are LiDAR-based. However, LiDAR may suffer significant performance degradation in poor environmental conditions, such as rainy and foggy weather. Compared to LiDAR, 4D Radar is more robust to various environments and can provide velocity information. Nevertheless, its point cloud is sparser and contains more noise, so existing LiDAR-dependent 3D object detection methods cannot be effectively applied to 4D Radar. To cope with this issue, we propose a new framework to improve the 3D object detection performance of 4D Radar, dubbed Pillar-based Dual Attention Network (PillarDAN). Specifically, PillarDAN builds Global Pillar Attention (GPA) to enhance feature extraction from the sparse 4D Radar point cloud. Meanwhile, Pillar Feature Attention (PFA) is proposed to focus on the truly effective information, thus suppressing point cloud noise. We also present an effective 3D coordinate embedding to improve the position awareness of the bird's-eye-view (BEV) feature map. Experimental results on the Astyx HiRes2019 dataset show that PillarDAN achieves a clear performance improvement, with 3D mAP 3.28% higher and BEV mAP 3.12% higher than the previous best method.
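For illustration only, the sketch below shows one way the two attention blocks named in the abstract (GPA and PFA) could be structured on top of PointPillars-style pillar features of shape (batch, pillars, channels). It is not the paper's implementation: the single-head attention in GPA, the squeeze-and-excitation-style gating in PFA, the residual connection, and all channel sizes are assumptions made for readability.

```python
# Minimal sketch (assumed, not the authors' code) of pillar-level dual attention.
import torch
import torch.nn as nn


class GlobalPillarAttention(nn.Module):
    """Self-attention across all pillars in a scene (single-head, assumed)."""

    def __init__(self, channels: int):
        super().__init__()
        self.qkv = nn.Linear(channels, 3 * channels)
        self.proj = nn.Linear(channels, channels)
        self.scale = channels ** -0.5

    def forward(self, x):                       # x: (B, P, C) pillar features
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return x + self.proj(attn @ v)          # residual connection (assumed)


class PillarFeatureAttention(nn.Module):
    """Per-pillar channel re-weighting (squeeze-and-excitation style, assumed)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, P, C)
        return x * self.gate(x)                 # down-weight uninformative channels


if __name__ == "__main__":
    feats = torch.randn(2, 1024, 64)            # toy batch: 2 scenes, 1024 pillars, 64 channels
    feats = GlobalPillarAttention(64)(feats)
    feats = PillarFeatureAttention(64)(feats)
    print(feats.shape)                          # torch.Size([2, 1024, 64])
```

The ordering of the two blocks and the 3D coordinate embedding for the BEV feature map are not specified by the abstract and are therefore left out of this sketch.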