Computer science
Artificial intelligence
Modality (human-computer interaction)
Transformer
Feature learning
Computer vision
Pattern recognition (psychology)
Engineering
Authors
Yi Wang, Conrad M. Albrecht, Xiao Xiang Zhu
Identifier
DOI: 10.1109/igarss46834.2022.9883983
Abstract
Self-supervised learning (SSL) has attracted much interest in remote sensing and Earth observation due to its ability to learn task-agnostic representations without human annotation. While most of the existing SSL works in remote sensing utilize ConvNet backbones and focus on a single modality, we explore the potential of vision transformers (ViTs) for joint SAR-optical representation learning. Based on DINO, a state-of-the-art SSL algorithm that distills knowledge from two augmented views of an input image, we combine SAR and optical imagery by concatenating all channels to a unified input. Subsequently, we randomly mask out channels of one modality as a data augmentation strategy. While training, the model gets fed optical-only, SAR-only, and SAR-optical image pairs, learning both inner- and intra-modality representations. Experimental results employing the BigEarthNet-MM dataset demonstrate the benefits of both the ViT backbones and the proposed multimodal SSL algorithm DINO-MM.
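To illustrate the two mechanisms the abstract describes, concatenating all SAR and optical channels into a single unified input and randomly masking out the channels of one modality as augmentation, here is a minimal PyTorch sketch. It is not the authors' implementation: the channel counts (2 SAR, 12 optical), the names concat_modalities and RandomModalityMask, and the choice to mask by zeroing channels are assumptions made for the example.

# Minimal sketch (assumed, not the authors' code): unified SAR-optical input and
# random modality masking as described in the abstract.
import random
import torch

SAR_CHANNELS = 2        # assumption: Sentinel-1 VV/VH
OPTICAL_CHANNELS = 12   # assumption: Sentinel-2 bands as in BigEarthNet-MM


def concat_modalities(sar: torch.Tensor, optical: torch.Tensor) -> torch.Tensor:
    """Concatenate SAR and optical imagery along the channel axis into one input."""
    # sar: (B, 2, H, W), optical: (B, 12, H, W) -> joint input: (B, 14, H, W)
    return torch.cat([sar, optical], dim=1)


class RandomModalityMask:
    """With probability p, zero out all channels of one randomly chosen modality,
    so training sees optical-only, SAR-only, and joint SAR-optical views."""

    def __init__(self, p: float = 0.5):
        self.p = p

    def __call__(self, x: torch.Tensor) -> torch.Tensor:
        if random.random() >= self.p:
            return x                            # keep the joint SAR-optical view
        x = x.clone()
        if random.random() < 0.5:
            x[:, :SAR_CHANNELS] = 0.0           # mask SAR -> optical-only view
        else:
            x[:, SAR_CHANNELS:] = 0.0           # mask optical -> SAR-only view
        return x


if __name__ == "__main__":
    sar = torch.randn(4, SAR_CHANNELS, 120, 120)
    optical = torch.randn(4, OPTICAL_CHANNELS, 120, 120)
    view = RandomModalityMask(p=0.5)(concat_modalities(sar, optical))
    print(view.shape)  # torch.Size([4, 14, 120, 120])

In a DINO-style pipeline, a transform like this would be applied independently to each augmented view before the views are fed to the student and teacher networks.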