Lidar
Point cloud
Computer science
Artificial intelligence
Segmentation
Benchmark (surveying)
Computer vision
Fusion
Point (geometry)
Object detection
Remote sensing
Geography
Mathematics
Linguistics
Geometry
Geodesy
Philosophy
Authors
Sourabh Vora, Alex Lang, Bassam Helou, Oscar Beijbom
Source
Journal: Cornell University - arXiv
Date: 2019-11-22
Identifier
DOI:10.48550/arxiv.1911.10150
Abstract
Camera and lidar are important sensor modalities for robotics in general and self-driving cars in particular. The sensors provide complementary information, offering an opportunity for tight sensor fusion. Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature. In this work, we propose PointPainting: a sequential fusion method to fill this gap. PointPainting works by projecting lidar points into the output of an image-only semantic segmentation network and appending the class scores to each point. The appended (painted) point cloud can then be fed to any lidar-only method. Experiments show large improvements on three different state-of-the-art methods, PointRCNN, VoxelNet and PointPillars, on the KITTI and nuScenes datasets. The painted version of PointRCNN represents a new state of the art on the KITTI leaderboard for the bird's-eye view detection task. In ablation, we study how the effect of painting depends on the quality and format of the semantic segmentation output, and demonstrate how latency can be minimized through pipelining.
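The abstract's core mechanism, projecting lidar points into the image plane and appending the segmentation class scores at the corresponding pixel, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, argument shapes, and the single `lidar_to_image` projection matrix are assumptions for the sketch (real pipelines such as KITTI's compose several calibration matrices).

```python
import numpy as np

def paint_points(points, seg_scores, lidar_to_image):
    """Append per-pixel class scores to lidar points ("painting").

    points         : (N, 4) lidar points (x, y, z, reflectance).
    seg_scores     : (H, W, C) per-pixel class scores from an image
                     semantic segmentation network.
    lidar_to_image : (3, 4) projection from lidar frame to pixel coords
                     (assumed given; real setups chain calibration matrices).
    Returns an (N, 4 + C) "painted" point cloud; points that fall outside
    the image keep zero scores.
    """
    H, W, C = seg_scores.shape
    n_feat = points.shape[1]

    # Homogeneous lidar coordinates, projected into the image plane.
    xyz1 = np.hstack([points[:, :3], np.ones((len(points), 1))])
    proj = xyz1 @ lidar_to_image.T            # (N, 3)
    depth = proj[:, 2]
    u = np.round(proj[:, 0] / depth).astype(int)  # pixel column
    v = np.round(proj[:, 1] / depth).astype(int)  # pixel row

    # Keep only points in front of the camera and inside the image.
    valid = (depth > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    painted = np.zeros((len(points), n_feat + C), dtype=points.dtype)
    painted[:, :n_feat] = points
    painted[valid, n_feat:] = seg_scores[v[valid], u[valid]]
    return painted
```

The painted array can then be passed to a lidar-only detector in place of the raw point cloud, which is what makes the fusion "sequential": the segmentation network and the 3D detector stay unmodified and are simply chained.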