Computer science
Artificial intelligence
Point cloud
Computer vision
Sample (material)
Cloud computing
Pattern recognition (psychology)
Chromatography
Operating system
Chemistry
Authors
Jianan Li,Jie Wang,Junjie Chen,Tingfa Xu
Identifier
DOI:10.1109/tpami.2025.3528392
Abstract
Robust 3D perception amidst corruption is a crucial task in the realm of 3D vision. Conventional data augmentation methods aimed at enhancing corruption robustness typically apply random transformations to all point cloud samples offline, neglecting sample structure, which often leads to over- or under-enhancement. In this study, we propose an alternative approach to address this issue by employing sample-adaptive transformations based on sample structure, through an auto-augmentation framework named AdaptPoint++. Central to this framework is an imitator, which initiates with Position-aware Feature Extraction to derive intrinsic structural information from the input sample. Subsequently, a Deformation Controller and a Mask Controller predict per-anchor deformation and per-point masking parameters, respectively, facilitating corruption simulations. In conjunction with the imitator, a discriminator is employed to curb the generation of excessive corruption that deviates from the original data distribution. Moreover, we integrate a perception-guidance feedback mechanism to steer the generation of samples towards an appropriate difficulty level. To effectively train the classifier using the generated augmented samples, we introduce a Structure Reconstruction-assisted learning mechanism, bolstering the classifier's robustness by prioritizing intrinsic structural characteristics over superficial discrepancies induced by corruption. Additionally, to alleviate the scarcity of real-world corrupted point cloud data, we introduce two novel datasets: ScanObjectNN-C and MVPNET-C, closely resembling actual data in real-world scenarios. Experimental results demonstrate that our method attains state-of-the-art performance on multiple corruption benchmarks.
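The abstract describes an imitator that first extracts position-aware features from an input point cloud, then uses a Deformation Controller and a Mask Controller to predict per-anchor deformation and per-point masking, while a discriminator keeps the simulated corruption close to the original data distribution. Below is a minimal PyTorch-style sketch of that forward data flow; the module names (`Imitator`, `Discriminator`), the uniform anchor subsampling, the 0.1 offset bound, and the soft multiplicative masking are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the imitator/discriminator pair described in the abstract.
# All layer sizes and design details here are assumptions, not the AdaptPoint++ code.
import torch
import torch.nn as nn


class Imitator(nn.Module):
    """Predicts per-anchor deformation offsets and per-point masking scores
    from the structure of an input point cloud of shape (B, N, 3)."""

    def __init__(self, num_anchors: int = 16, feat_dim: int = 64):
        super().__init__()
        self.num_anchors = num_anchors
        # Position-aware feature extraction: a shared point-wise MLP (assumed form).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Deformation Controller: one 3-D offset per anchor.
        self.deform_head = nn.Linear(feat_dim, 3)
        # Mask Controller: one keep-probability logit per point.
        self.mask_head = nn.Linear(feat_dim, 1)

    def forward(self, xyz: torch.Tensor):
        B, N, _ = xyz.shape
        feats = self.point_mlp(xyz)                                  # (B, N, F)

        # Choose anchors by uniform subsampling (a real system might use FPS).
        idx = torch.linspace(0, N - 1, self.num_anchors, device=xyz.device).long()
        anchor_xyz = xyz[:, idx, :]                                  # (B, A, 3)
        anchor_feat = feats[:, idx, :]                               # (B, A, F)

        # Per-anchor deformation offsets, bounded so the corruption stays mild.
        offsets = 0.1 * torch.tanh(self.deform_head(anchor_feat))    # (B, A, 3)

        # Propagate each anchor's offset to the points nearest to it.
        dist = torch.cdist(xyz, anchor_xyz)                          # (B, N, A)
        nearest = dist.argmin(dim=-1)                                # (B, N)
        point_offset = torch.gather(
            offsets, 1, nearest.unsqueeze(-1).expand(-1, -1, 3))     # (B, N, 3)
        deformed = xyz + point_offset

        # Per-point keep-probabilities simulate occlusion/dropout; soft masking is
        # used here only as a differentiable stand-in for actually dropping points.
        keep_prob = torch.sigmoid(self.mask_head(feats))             # (B, N, 1)
        augmented = deformed * keep_prob
        return augmented, keep_prob


class Discriminator(nn.Module):
    """Scores whether an augmented cloud still resembles the data distribution."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, xyz: torch.Tensor):
        global_feat = self.mlp(xyz).max(dim=1).values                # (B, F)
        return self.head(global_feat)                                # real/fake logit


if __name__ == "__main__":
    cloud = torch.randn(2, 1024, 3)
    imitator, disc = Imitator(), Discriminator()
    aug, keep = imitator(cloud)
    print(aug.shape, disc(aug).shape)                                # (2, 1024, 3) (2, 1)
```

In the framework as described, the imitator and discriminator are trained jointly with the downstream classifier under a perception-guidance feedback signal; the sketch above only illustrates the forward pass that produces an augmented sample.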