Computer science
Perception
Feature (linguistics)
Artificial intelligence
Pedestrian detection
Pedestrian
LiDAR
Fusion
Feature selection
Computer vision
Sensor fusion
Point cloud
Adaptation (eye)
Object detection
Pattern recognition (psychology)
Engineering
Remote sensing
Geography
Transportation engineering
Optics
Physics
Philosophy
Linguistics
Biology
Neuroscience
Authors
Donghao Qiao, Farhana Zulkernine
Identifier
DOI:10.1109/wacv56688.2023.00124
Abstract
Cooperative perception allows a Connected Autonomous Vehicle (CAV) to interact with the other CAVs in its vicinity to enhance perception of surrounding objects and thereby increase safety and reliability. It can compensate for the limitations of conventional vehicular perception such as blind spots, low resolution, and weather effects. An effective feature fusion model for the intermediate fusion methods of cooperative perception can improve feature selection and information aggregation to further enhance perception accuracy. We propose adaptive feature fusion models with trainable feature selection modules. One of our proposed models, Spatial-wise Adaptive feature Fusion (S-AdaFusion), outperforms all other state-of-the-art (SOTA) models on two subsets of the OPV2V dataset: the Default CARLA Towns subset for vehicle detection and the Culver City subset for domain adaptation. In addition, previous studies have only tested cooperative perception for vehicle detection; a pedestrian, however, is much more likely to be seriously injured in a traffic accident. We evaluate the performance of cooperative perception for both vehicle and pedestrian detection using the CODD dataset. Our architecture achieves higher Average Precision (AP) than existing models for both vehicle and pedestrian detection on the CODD dataset. The experiments demonstrate that cooperative perception also improves pedestrian detection accuracy compared to the conventional single-vehicle perception process.
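The spatial-wise adaptive fusion described above can be sketched as follows. This is a simplified illustration under assumptions, not the paper's implementation: the per-location scores would come from a trainable feature selection module in the actual architecture, whereas here they are plain inputs; the function name `s_adafusion` and all shapes are illustrative.

```python
import numpy as np

def s_adafusion(features, scores):
    """Illustrative spatial-wise adaptive fusion (sketch, not the paper's code).

    features: (N, C, H, W) intermediate feature maps shared by N CAVs.
    scores:   (N, 1, H, W) raw per-location selection scores; in the paper
              these would be produced by a trainable module.
    Returns a fused (C, H, W) map: softmax over the CAV axis at each
    spatial location, then a weighted sum of the vehicle features.
    """
    # Numerically stable softmax across the CAV axis per spatial location
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    # Weights broadcast over the channel axis; each (h, w) cell picks an
    # adaptive blend of the N vehicles' features
    return (w * features).sum(axis=0)

# Toy example: 2 CAVs, 4 feature channels, an 8x8 spatial grid
feats = np.random.rand(2, 4, 8, 8)
scores = np.random.rand(2, 1, 8, 8)
fused = s_adafusion(feats, scores)
print(fused.shape)  # (4, 8, 8)
```

With equal scores at every location the fusion reduces to a simple average of the CAV feature maps, which is the behavior the trainable selection module is meant to improve on by emphasizing the vehicle with the better view of each region.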