Keywords
Artificial intelligence, Computer science, Proportion (ratio), Annotation, Benchmark (surveying), Natural language processing, Machine learning, Cartography, Geography
Authors
Yi Wang, Nassim Ait Ali Braham, Zhitong Xiong, Chenying Liu, Conrad M. Albrecht, Xiao Xiang Zhu
Identifier
DOI: 10.1109/MGRS.2023.3281651
Abstract
Self-supervised pretraining bears the potential to generate expressive representations from large-scale Earth observation (EO) data without human annotation. However, most existing pretraining in the field is based on ImageNet or medium-sized, labeled remote sensing (RS) datasets. In this article, we share an unlabeled dataset, Self-Supervised Learning for Earth Observation-Sentinel-1/2 (SSL4EO-S12), that assembles a large-scale, global, multimodal, and multiseasonal corpus of satellite imagery. We demonstrate that SSL4EO-S12 succeeds in self-supervised pretraining for a set of representative methods, namely momentum contrast (MoCo), self-distillation with no labels (DINO), masked autoencoders (MAE), and data2vec, and in multiple downstream applications, including scene classification, semantic segmentation, and change detection. Our benchmark results prove the effectiveness of SSL4EO-S12 compared to existing datasets. The dataset, related source code, and pretrained models are available at https://github.com/zhu-xlab/SSL4EO-S12.
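For illustration only, and not the authors' released implementation: the minimal PyTorch sketch below shows a single MoCo-style training step of the kind the abstract refers to, contrasting two augmented views of the same multispectral patch against a queue of negative keys and updating the key encoder by exponential moving average. The Encoder module, the moco_step helper, the queue size, and the 13-band input are hypothetical stand-ins; the actual backbones and training scripts live in the linked repository.

```python
import copy
import torch
import torch.nn.functional as F
from torch import nn

# Hypothetical encoder for 13-band Sentinel-2 patches; SSL4EO-S12's real
# backbones (ResNet/ViT variants) are provided in the linked repository.
class Encoder(nn.Module):
    def __init__(self, in_channels=13, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        # L2-normalized embeddings so dot products are cosine similarities
        return F.normalize(self.net(x), dim=1)

query_enc = Encoder()
key_enc = copy.deepcopy(query_enc)        # momentum (key) encoder
for p in key_enc.parameters():
    p.requires_grad = False               # updated only via EMA, not backprop

def moco_step(x_q, x_k, queue, m=0.999, t=0.2):
    """One MoCo-style update: contrast two views of the same patch against
    a queue of negatives, then EMA-update the key encoder."""
    q = query_enc(x_q)                    # queries: (B, dim)
    with torch.no_grad():
        k = key_enc(x_k)                  # keys:    (B, dim)
    l_pos = (q * k).sum(dim=1, keepdim=True)           # positive logits (B, 1)
    l_neg = q @ queue.T                                # negative logits (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / t      # temperature-scaled
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive is index 0
    loss = F.cross_entropy(logits, labels)
    with torch.no_grad():                 # momentum update of the key encoder
        for pq, pk in zip(query_enc.parameters(), key_enc.parameters()):
            pk.mul_(m).add_(pq, alpha=1 - m)
    return loss, k                        # k refreshes the negative queue

# Toy demo with random stand-ins for two augmented 13-band views.
queue = F.normalize(torch.randn(4096, 128), dim=1)     # hypothetical queue
x_q, x_k = torch.randn(8, 13, 64, 64), torch.randn(8, 13, 64, 64)
loss, new_keys = moco_step(x_q, x_k, queue)
loss.backward()                           # gradients reach the query encoder only
```

In a full training loop, new_keys would be enqueued and the oldest entries dequeued each step; the other pretraining methods named in the abstract (DINO, MAE, data2vec) replace this contrastive objective with self-distillation or masked-reconstruction losses.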