Computer science
Artificial intelligence
Land cover
Segmentation
Remote sensing
Intersection (aeronautics)
Image segmentation
Aerial imagery
Feature extraction
Set (abstract data type)
Feature vector
Pattern recognition (psychology)
Contextual image classification
Computer vision
Domain (mathematical analysis)
Feature (linguistics)
Image (mathematics)
Land use
Mathematics
Cartography
Geography
Engineering
Mathematical analysis
Philosophy
Civil engineering
Linguistics
Programming language
Authors
Shunping Ji, Dingpan Wang, Muying Luo
Identifier
DOI: 10.1109/tgrs.2020.3020804
Abstract
The accuracy of remote sensing image segmentation and classification decreases dramatically when the source and target images come from different sources; deep learning-based models have boosted performance, but they are only effective when trained on a large number of labeled source images similar to the target images. In this article, we propose a generative adversarial network (GAN) based domain adaptation method for land cover classification of new target remote sensing images that differ substantially from the labeled source images. In our GAN, the source and target images are fully aligned in the image, feature, and output spaces in two stages via adversarial learning. The source images are translated to the style of the target images, and the translated images are then used to train a fully convolutional network (FCN) that segments the target images into land cover types. The domain adaptation and segmentation are integrated into an end-to-end framework. Experiments on a multisource data set covering more than 3500 km² with 51,560 256×256 high-resolution satellite images of Wuhan city, and on a cross-city data set with 11,383 256×256 aerial images of Potsdam and Vaihingen, demonstrated that our method exceeded recent GAN-based domain adaptation methods by at least 6.1% in mean intersection over union (mIoU) and 4.9% in overall accuracy (OA). We also showed that our GAN is a generic framework that can be combined with other domain transfer methods to boost their performance.
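The pipeline the abstract outlines — translate labeled source tiles to the target style with a generator, train an FCN segmenter on the translated tiles, and adversarially align the output space with a discriminator — can be sketched in a few lines of PyTorch. The toy modules, loss weight, and single training step below are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of GAN-based domain adaptation for segmentation.
# All architectures and hyperparameters here are stand-ins (assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy image-to-image translator (source -> target style)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Segmenter(nn.Module):
    """Toy fully convolutional network producing per-pixel class logits."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1))
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Patch discriminator over softmax maps (output-space alignment)."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

G, S, D = Generator(), Segmenter(), Discriminator()
opt_gs = torch.optim.Adam(list(G.parameters()) + list(S.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One illustrative step on random stand-in data (256x256 tiles, as in the paper).
src = torch.randn(2, 3, 256, 256)              # labeled source images
src_lbl = torch.randint(0, 6, (2, 256, 256))   # source ground truth
tgt = torch.randn(2, 3, 256, 256)              # unlabeled target images

# 1) Translate source tiles to the target style, then segment both domains.
fake_tgt = G(src)
pred_src = S(fake_tgt)
pred_tgt = S(tgt)

# 2) Supervised loss on translated source; adversarial loss pushes target
#    predictions to be indistinguishable from source-style predictions.
seg_loss = F.cross_entropy(pred_src, src_lbl)
d_on_tgt = D(F.softmax(pred_tgt, dim=1))
adv_loss = bce(d_on_tgt, torch.ones_like(d_on_tgt))
opt_gs.zero_grad()
(seg_loss + 0.01 * adv_loss).backward()   # 0.01 is an assumed weight
opt_gs.step()

# 3) Train the discriminator to separate the two prediction distributions.
d_src = D(F.softmax(pred_src.detach(), dim=1))
d_tgt = D(F.softmax(pred_tgt.detach(), dim=1))
d_loss = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()
```

Translating in image space and discriminating in output space is what lets the segmenter train on labeled data that already looks like the target domain, which is the core idea behind the end-to-end framing.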
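The two reported metrics both fall out of a class confusion matrix: per-class IoU is TP / (TP + FP + FN), mIoU is its mean over classes, and OA is the matrix trace divided by the total pixel count. A small NumPy sketch (function names are ours, not from the paper's code):

```python
# Minimal sketch of mIoU and OA from a confusion matrix (NumPy).
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix (rows: gt)."""
    idx = num_classes * gt.astype(int) + pred.astype(int)
    return np.bincount(idx.ravel(), minlength=num_classes ** 2).reshape(
        num_classes, num_classes)

def miou_and_oa(cm):
    inter = np.diag(cm)                        # per-class true positives
    union = cm.sum(0) + cm.sum(1) - inter      # TP + FP + FN
    iou = inter / np.maximum(union, 1)         # guard empty classes
    return iou.mean(), inter.sum() / cm.sum()

pred = np.random.randint(0, 6, (256, 256))     # stand-in prediction map
gt = np.random.randint(0, 6, (256, 256))       # stand-in ground truth
miou, oa = miou_and_oa(confusion_matrix(pred, gt, 6))
print(f"mIoU={miou:.3f}  OA={oa:.3f}")
```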