Keywords
Artificial intelligence; Computer science; Computer vision; LiDAR; Sensor fusion; Fusion; Object detection; Image fusion; Remote sensing; Image (mathematics); Geography; Segmentation; Linguistics; Philosophy
Authors
Zhen Shen,Yunze He,Xu Du,Junfeng Yu,Hongjin Wang,Yaonan Wang
Source
Journal: IEEE Sensors Journal
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-30
Volume/Issue: 24 (6): 8379-8389
Citations: 22
Identifiers
DOI: 10.1109/jsen.2024.3357826
Abstract
In traffic scenes, target detection is affected by complex backgrounds, illumination changes, and mutual occlusion of moving targets, all of which degrade sensor performance and raise the false detection rate. To address these challenges, this study proposes YCANet, a multi-data-source target detection network that integrates target tracking with camera and LiDAR fusion. YCANet uses an improved YOLOv7 and CenterPoint to detect targets in visible images and point clouds separately, and adopts the Aggregated Euclidean Distance (AED) as a new metric in the data association module for tracking the image and point-cloud detections, which improves association robustness and reduces tracking errors. An optimal matching fusion strategy then merges the detection and tracking results of the two sensors for decision-level matching. The camera-LiDAR fusion improves weak detection results, while the tracking incorporated into the detection pipeline reduces the false detection rate. A self-built dataset and part of the ONCE dataset are used for network training and testing. Compared with seven other algorithms, the experimental results show that the proposed approach better meets accuracy requirements, reaching an mAP of 83.40% while maintaining a false detection rate of 18.19%.
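The abstract describes an AED-based cost used to associate tracked targets with new detections. The paper's exact AED definition is not given here, so the sketch below makes an assumption: AED is taken as the sum of Euclidean distances between corresponding corner points of two 2-D boxes, and the association step is a simple greedy matching on that cost matrix (the `aed`, `associate`, and `max_cost` names are illustrative, not from the paper).

```python
import numpy as np

def aed(track_box, det_box):
    """Hypothetical Aggregated Euclidean Distance: sum of Euclidean
    distances between corresponding corners of two (x1, y1, x2, y2)
    boxes. The paper's actual AED formulation may differ."""
    tc = np.array([[track_box[0], track_box[1]],
                   [track_box[2], track_box[1]],
                   [track_box[0], track_box[3]],
                   [track_box[2], track_box[3]]], dtype=float)
    dc = np.array([[det_box[0], det_box[1]],
                   [det_box[2], det_box[1]],
                   [det_box[0], det_box[3]],
                   [det_box[2], det_box[3]]], dtype=float)
    return float(np.linalg.norm(tc - dc, axis=1).sum())

def associate(tracks, dets, max_cost=50.0):
    """Greedy one-to-one association on the AED cost matrix, a
    simplified stand-in for the paper's data association module.
    Returns a list of (track_index, detection_index) pairs."""
    cost = np.array([[aed(t, d) for d in dets] for t in tracks])
    matches, used_t, used_d = [], set(), set()
    # Visit candidate pairs from lowest to highest cost.
    for flat in np.argsort(cost, axis=None):
        i, j = divmod(int(flat), cost.shape[1])
        if i in used_t or j in used_d or cost[i, j] > max_cost:
            continue
        matches.append((i, j))
        used_t.add(i)
        used_d.add(j)
    return matches
```

In a full tracker, an optimal assignment (e.g. the Hungarian algorithm) would typically replace the greedy loop; the greedy version keeps the sketch dependency-free beyond NumPy.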