Computer science
Artificial intelligence
Segmentation
Computer vision
Robustness (evolution)
Motion planning
Motion (physics)
Sensor fusion
Robotics
Gene
Biochemistry
Chemistry
Authors
Gustavo Salazar-Gomez, Wenqian Liu, Manuel Diaz-Zapata, David Sierra-Gonzalez, Christian Laugier
Source
Journal: Cornell University - arXiv
Date: 2023-11-09
Identifier
DOI: 10.48550/arxiv.2311.05319
Abstract
In autonomous driving, addressing occlusion scenarios is crucial yet challenging. Robust surrounding perception is essential for handling occlusions and aiding motion planning. State-of-the-art models fuse Lidar and camera data to produce impressive perception results, but detecting occluded objects remains challenging. In this paper, we emphasize the crucial role of temporal cues by integrating them alongside these modalities to address this challenge. We propose a novel approach for bird's-eye-view semantic grid segmentation that leverages sequential sensor data to achieve robustness against occlusions. Our model extracts information from the sensor readings using attention operations and aggregates this information into a lower-dimensional latent representation, thus enabling the processing of multi-step inputs at each prediction step. Moreover, we show how it can also be directly applied to forecast the development of traffic scenes and be seamlessly integrated into a motion planner for trajectory planning. On the semantic segmentation task, we evaluate our model on the nuScenes dataset and show that it outperforms other baselines, with particularly large differences when evaluating on occluded and partially occluded vehicles. Additionally, on the motion planning task we are among the early teams to train and evaluate on nuPlan, a cutting-edge large-scale dataset for motion planning.
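The abstract describes aggregating sensor features into a lower-dimensional latent via attention, updated once per time step so multiple past frames can inform each prediction. The sketch below illustrates that general pattern only; it is not the authors' implementation, and all dimensions, weight matrices, and the residual-update rule are hypothetical assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latent, tokens, Wq, Wk, Wv):
    # latent: (L, d) query vectors; tokens: (N, d) sensor features (keys/values).
    q, k, v = latent @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    return scores @ v  # (L, d): tokens summarized into the latent

rng = np.random.default_rng(0)
d, L, N, T = 32, 16, 200, 3  # hypothetical feature dim, latent size, token count, time steps
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
latent = rng.standard_normal((L, d))  # stand-in for a learned latent array

for t in range(T):
    # Stand-in for fused Lidar/camera features at time step t.
    tokens = rng.standard_normal((N, d))
    # Residual update: the latent accumulates information across time steps.
    latent = latent + cross_attention(latent, tokens, Wq, Wk, Wv)

print(latent.shape)  # (16, 32): fixed-size summary regardless of N or T
```

The key property, as the abstract suggests, is that the latent stays a fixed size (L, d) no matter how many sensor tokens or time steps are processed, which keeps per-step cost bounded for sequential inputs.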