Computer science
Point cloud
LiDAR
Artificial intelligence
Segmentation
Object detection
Feature
Transfer learning
Domain adaptation
Computer vision
RGB color model
Pattern recognition
Training set
Remote sensing
Classifier
Authors
Christoph B. Rist, Markus Enzweiler, Dariu M. Gavrila
Identifiers
DOI:10.1109/ivs.2019.8814047
Abstract
A considerable amount of annotated training data is necessary to achieve state-of-the-art performance in perception tasks using point clouds. Unlike RGB images, LiDAR point clouds captured with different sensors or varied mounting positions exhibit a significant shift in their input data distribution. This shift can impede the transfer of trained feature extractors between datasets, as it severely degrades performance. We analyze the transferability of point cloud features between two different LiDAR sensor set-ups (32 and 64 vertical scanning planes with different geometry). We propose a supervised training methodology to learn transferable features in a pre-training step on LiDAR datasets that are heterogeneous in their data and label domains. In extensive experiments on object detection and semantic segmentation in a multi-task setup, we analyze the performance of our network architecture under the impact of a change in the input data domain. We show that our pre-training approach effectively increases performance for both target tasks at once, without requiring an actual multi-task dataset for pre-training.
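The abstract notes that pre-training uses datasets that are heterogeneous in their label domains, i.e. each dataset is labeled for only one of the two tasks. A common way to realize this (a minimal sketch, not the authors' implementation; the function and task names are illustrative assumptions) is to mask out the loss of the unlabeled task per batch, so a shared feature extractor still receives gradients from both tasks across batches:

```python
# Hypothetical sketch: multi-task pre-training on label-heterogeneous data.
# Each batch originates from one dataset and carries labels for exactly one
# task; the other task head contributes no loss for that batch.

def multitask_loss(pred_det, pred_seg, labels, task):
    """Mean-squared-error loss for whichever task this batch is labeled for.

    pred_det / pred_seg: per-sample outputs of the detection and
    segmentation heads (plain floats here, standing in for tensors).
    labels: ground truth for the task named by `task`.
    """
    if task == "detection":
        # Detection-only dataset: the segmentation head is ignored.
        return sum((p - y) ** 2 for p, y in zip(pred_det, labels)) / len(labels)
    elif task == "segmentation":
        # Segmentation-only dataset: the detection head is ignored.
        return sum((p - y) ** 2 for p, y in zip(pred_seg, labels)) / len(labels)
    raise ValueError(f"unknown task: {task}")
```

Over many alternating batches, both heads are trained even though no single sample is annotated for both tasks, which is the effect the pre-training step relies on.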