Keywords
Synthetic aperture radar, Computer science, Artificial intelligence, Land cover, Image fusion, Pixel, Feature (linguistics), Pattern recognition (psychology), Remote sensing, Computer vision, Fusion, Feature extraction, Task (project management), Optics (focus), Image (mathematics), Land use, Geography, Physics, Engineering, Philosophy, Civil engineering, Optics, Economics, Management, Linguistics
Authors
Yuxing Chen, Lorenzo Bruzzone
Identifier
DOI:10.1109/tgrs.2021.3128072
Abstract
The effective combination of the complementary information provided by huge amounts of unlabeled multisensor data (e.g., synthetic aperture radar (SAR) and optical images) is a critical issue in remote sensing. Recently, contrastive learning methods have achieved remarkable success in obtaining meaningful feature representations from multiview data. However, these methods focus only on image-level features, which may not satisfy the requirements of dense prediction tasks such as land-cover mapping. In this work, we propose a self-supervised framework for SAR-optical data fusion and land-cover mapping tasks. SAR and optical images are fused by using a multiview contrastive loss at the image level and superpixel level according to one of three possible strategies: early, intermediate, or late fusion. For the land-cover mapping task, we assign each pixel a land-cover class through the joint use of pretrained features and the spectral information of the image itself. Experimental results show that the proposed approach not only achieves comparable accuracy but also reduces the dimension of the features with respect to the image-level contrastive learning method. Among the three fusion strategies, intermediate fusion achieves the best performance. The combination of the pixel-level fusion approach and self-training on spectral indices leads to further improvements in the land-cover mapping task with respect to the image-level fusion approach, especially with sparse pseudo labels. The code to reproduce our results is available at https://github.com/yusin2it/SARoptical_fusion .
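The multiview contrastive loss mentioned in the abstract can be illustrated with a minimal InfoNCE-style sketch, assuming embeddings of the two modalities where row i of each array comes from the same scene (a positive pair). This is not the authors' implementation, only a generic cross-modal contrastive term; the function name, temperature value, and NumPy formulation are illustrative assumptions.

```python
import numpy as np

def multiview_info_nce(z_sar, z_opt, temperature=0.1):
    """InfoNCE-style contrastive loss between two views (sketch).

    z_sar, z_opt: (N, D) arrays of embeddings; row i of each array is
    assumed to describe the same scene, so the diagonal of the
    similarity matrix holds the positive pairs.
    """
    # L2-normalize so the dot product becomes a cosine similarity
    z_sar = z_sar / np.linalg.norm(z_sar, axis=1, keepdims=True)
    z_opt = z_opt / np.linalg.norm(z_opt, axis=1, keepdims=True)
    logits = z_sar @ z_opt.T / temperature  # (N, N) similarity matrix
    # Numerically stable row-wise log-softmax
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the matching (diagonal) pairs
    return -np.mean(np.diag(log_prob))
```

Aligned SAR/optical embeddings yield a lower loss than mismatched ones, which is the signal the self-supervised fusion exploits; in the paper this term is applied both at the image level and at the superpixel level.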