Remote sensing
Land cover
Computer science
Normalization (sociology)
Domain adaptation
Leverage (statistics)
Scale (ratio)
Image resolution
Artificial intelligence
Computer vision
Land use
Geography
Cartography
Sociology
Civil engineering
Engineering
Classifier (UML)
Anthropology
Authors
Junjue Wang, Ailong Ma, Yanfei Zhong, Zhuo Zheng, Liangpei Zhang
Identifier
DOI: 10.1016/j.rse.2022.113058
Abstract
Urban land-cover information is essential for resource allocation and sustainable urban development. Recently, deep learning algorithms have shown promising results in land-cover mapping with high spatial resolution (HSR) imagery. However, limited annotations and the divergence between multi-sensor images challenge the transferability of deep learning, hindering city-level or national-level mapping. In this paper, we propose a scheme to leverage small-scale airborne images with labels (source) for the classification of unlabeled large-scale spaceborne images (target). Considering the sensor characteristics, a Cross-Sensor Land-cOVEr framework, called LoveCS, is introduced to address the difficulties of spatial resolution inconsistency and spectral differences. As for the structural design, cross-sensor normalization is proposed to automatically learn sensor-specific normalization weights, thereby narrowing the spectral differences hierarchically. Furthermore, a dense multi-scale decoder is proposed to effectively fuse the multi-scale features from different sensors. As for the model optimization, self-training domain adaptation is adopted, and multi-scale pseudo-labeling is proposed to reduce the scale divergence caused by the spatial resolution inconsistency. The effectiveness of LoveCS was tested on data from the three cities of Nanjing, Changzhou, and Wuhan in China. The comprehensive results show that LoveCS is superior to existing domain adaptation methods in cross-sensor tasks and has good generalizability. Compared with existing land-cover products, the obtained results have the highest accuracy and spatial resolution (1.0 m). Overall, LoveCS provides a new perspective for large-scale land-cover mapping based on limited HSR images.
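The abstract describes cross-sensor normalization as learning sensor-specific normalization weights while the rest of the network is shared. Below is a minimal, hypothetical PyTorch sketch of that general idea, assuming one BatchNorm branch per sensor routed by a sensor index; the class name, signature, and routing scheme are illustrative assumptions and are not taken from the paper's LoveCS implementation.

```python
# Hypothetical sketch: sensor-specific normalization branches (not the
# authors' exact LoveCS code). Each sensor (e.g., airborne source vs.
# spaceborne target) keeps its own BatchNorm statistics and affine weights,
# so spectral differences are absorbed per sensor while convolutional
# weights remain shared across sensors.
import torch
import torch.nn as nn


class CrossSensorNorm(nn.Module):
    """Normalization layer with one BatchNorm2d branch per sensor."""

    def __init__(self, num_channels: int, num_sensors: int = 2):
        super().__init__()
        self.norms = nn.ModuleList(
            nn.BatchNorm2d(num_channels) for _ in range(num_sensors)
        )

    def forward(self, x: torch.Tensor, sensor_id: int) -> torch.Tensor:
        # Route the batch through the normalization branch of its sensor.
        return self.norms[sensor_id](x)


# Usage: source (airborne) batches use sensor_id=0, target (spaceborne) use 1.
norm = CrossSensorNorm(num_channels=64)
airborne_feat = torch.randn(4, 64, 128, 128)
spaceborne_feat = torch.randn(4, 64, 128, 128)
out_src = norm(airborne_feat, sensor_id=0)
out_tgt = norm(spaceborne_feat, sensor_id=1)
```

The design choice illustrated here is that per-sensor statistics and affine parameters let the network account for spectral differences between sensors without duplicating the feature extractor; how LoveCS integrates this hierarchically is detailed in the paper itself.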