Keywords
Discriminant
Computer science
Artificial intelligence
Land cover
Feature (linguistics)
Pattern recognition (psychology)
Contextual image classification
Feature extraction
Supervised learning
Feature learning
Machine learning
Remote sensing
Image (mathematics)
Artificial neural network
Land use
Geology
Engineering
Philosophy
Civil engineering
Linguistics
Authors
Zhixiang Xue, YU Xuchu, Anzhu Yu, Bing Liu, Pengqiang Zhang, Shentong Wu
Identifier
DOI: 10.1109/tgrs.2022.3190466
Abstract
Deep learning models have shown great potential in remote sensing image processing and analysis. Nevertheless, labeled samples are often too scarce to train deep networks, which seriously limits the performance of these models. To resolve this contradiction, we propose a generative self-supervised feature learning (S2FL) architecture for land cover classification from multimodal remote sensing images. Specifically, multiple complementary observed views are constructed from the multimodal images and then used for generative self-supervised learning. The proposed S2FL architecture extracts high-level, meaningful feature representations from the multiview data without requiring any labeled information, providing a feasible way to relieve the urgent need for annotated samples. The learned features are normalized and merged with the corresponding spectral information to further improve their discriminative capability, and the fused features are used for land cover classification. Compared with existing supervised, semi-supervised, and self-supervised approaches, the proposed generative self-supervised model achieves superior feature learning and land cover classification performance, especially in the small-sample classification case.
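To make the described pipeline concrete, the following is a minimal sketch of the workflow outlined in the abstract: learn features from unlabeled multiview (multimodal) pixels with a generative objective, normalize them, fuse them with the raw spectral information, and classify land cover from a small labeled subset. This is not the authors' implementation; the generative self-supervised learner is approximated here by a simple multiview autoencoder, and all names, layer sizes, and data dimensions (MultiViewAutoencoder, train_unsupervised, the toy spectral/lidar arrays) are hypothetical.

# Hypothetical sketch of an S2FL-style pipeline: multiview autoencoder for
# unsupervised feature learning, then feature fusion and small-sample classification.
import numpy as np
import torch
import torch.nn as nn
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

class MultiViewAutoencoder(nn.Module):
    """One encoder/decoder pair per observed view; the learned representation is
    the concatenation of the per-view latent codes (a simplifying assumption)."""
    def __init__(self, view_dims, latent_dim=32):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, latent_dim))
             for d in view_dims])
        self.decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, d))
             for d in view_dims])

    def forward(self, views):
        codes = [enc(v) for enc, v in zip(self.encoders, views)]
        recons = [dec(z) for dec, z in zip(self.decoders, codes)]
        return torch.cat(codes, dim=1), recons

def train_unsupervised(model, views, epochs=50, lr=1e-3):
    """Generative (reconstruction) objective only -- no labels are used here."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        _, recons = model(views)
        loss = sum(loss_fn(r, v) for r, v in zip(recons, views))
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_pixels, spectral_dim, lidar_dim = 500, 30, 8     # toy multimodal data
    spectral = rng.normal(size=(n_pixels, spectral_dim)).astype(np.float32)
    lidar = rng.normal(size=(n_pixels, lidar_dim)).astype(np.float32)
    labels = rng.integers(0, 5, size=n_pixels)          # 5 toy land cover classes

    views = [torch.from_numpy(spectral), torch.from_numpy(lidar)]
    model = train_unsupervised(MultiViewAutoencoder([spectral_dim, lidar_dim]), views)

    with torch.no_grad():
        features, _ = model(views)
    # Normalize the learned features and fuse them with the raw spectral bands.
    fused = np.hstack([StandardScaler().fit_transform(features.numpy()), spectral])

    # Small-sample classification: train on only a handful of labeled pixels.
    train_idx = rng.choice(n_pixels, size=50, replace=False)
    clf = SVC().fit(fused[train_idx], labels[train_idx])
    print("toy overall accuracy:", clf.score(fused, labels))

The key design point mirrored from the abstract is the separation of stages: the reconstruction-based feature learning step never touches labels, and supervision enters only in the final classifier trained on the fused features.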