Point cloud
Computer science
Segmentation
Artificial intelligence
Transfer learning
Domain (mathematical analysis)
Feature (linguistics)
Labeled data
Pattern recognition (psychology)
Point (geometry)
Image segmentation
Generalization
Computer vision
Machine learning
Mathematical analysis
Linguistics
Philosophy
Geometry
Mathematics
Authors
Shuo Shen, Yan Xia, Andreas Eich, Yusheng Xu, Bisheng Yang, Uwe Stilla
Identifier
DOI:10.1109/lgrs.2023.3294748
Abstract
3D point cloud semantic segmentation plays an essential role in fine-grained scene understanding, from photogrammetry to autonomous driving. Although recent efforts have pushed 3D semantic segmentation forward, many solutions do not generalize well to new data acquired with different sensor configurations. For example, when a segmentation model learned from terrestrial laser scanning (TLS) data is transferred to mobile laser scanning (MLS) data, its performance drops dramatically. Moreover, richly labeled data is usually required, yet labeling point cloud data is time-consuming and labor-intensive in practice. In light of this, we propose SegTrans, an unsupervised domain adaptation method for point cloud semantic segmentation that largely improves generalization from a labeled dataset (source domain) to an unlabeled dataset (target domain). Specifically, we first introduce a data selection module (DSM) to tackle the discrepancy between datasets at the data level. Then, an adversarial learning module (ALM), consisting of only two fully connected layers, is trained iteratively with an adversarial loss to align the domain-specific features of the source and target domains. Experiments show that the proposed method achieves an overall accuracy (OA) of 88% on the TUM City Campus dataset (MLS) when trained on the Semantic3D dataset (TLS).
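To make the adversarial alignment idea concrete, the sketch below shows a two-layer domain discriminator trained with an adversarial loss on per-point features, using a gradient-reversal formulation. This is a minimal, hypothetical PyTorch illustration: the class names, feature dimensions, and the gradient-reversal trick are assumptions for exposition and are not taken from the paper's implementation.

```python
# Minimal sketch (assumptions): a two-fully-connected-layer domain discriminator
# with an adversarial loss, in the spirit of the ALM described in the abstract.
# Names, dimensions, and the gradient-reversal formulation are illustrative only.

import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass,
    so minimizing the domain loss pushes the backbone toward domain-invariant features."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class DomainDiscriminator(nn.Module):
    """Two fully connected layers that predict whether a per-point feature
    comes from the source (TLS) or the target (MLS) domain."""

    def __init__(self, feat_dim: int = 256, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),  # single domain logit: source vs. target
        )

    def forward(self, features: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
        reversed_feat = GradientReversal.apply(features, lambd)
        return self.net(reversed_feat)


if __name__ == "__main__":
    # Toy usage with random per-point features standing in for a segmentation backbone.
    disc = DomainDiscriminator(feat_dim=256)
    bce = nn.BCEWithLogitsLoss()

    source_feat = torch.randn(1024, 256, requires_grad=True)  # labeled TLS points
    target_feat = torch.randn(1024, 256, requires_grad=True)  # unlabeled MLS points

    logits = disc(torch.cat([source_feat, target_feat], dim=0))
    domain_labels = torch.cat([torch.zeros(1024, 1), torch.ones(1024, 1)], dim=0)

    # Adversarial loss: the discriminator learns to separate the domains, while the
    # reversed gradient would train the feature extractor to fool it.
    adv_loss = bce(logits, domain_labels)
    adv_loss.backward()
    print(f"adversarial loss: {adv_loss.item():.4f}")
```

In such a setup, the gradient reversal lets a single backward pass serve both players of the min-max game: the discriminator minimizes the domain-classification loss while the feature extractor receives the negated gradient and is pushed toward features the discriminator cannot separate.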