Keywords
Transcoding; synthetic aperture radar (SAR); multispectral images; remote sensing; convolutional neural networks (CNNs); change detection; deep learning; artificial intelligence; feature extraction; image resolution; computer vision; pattern recognition; computer science
Authors
Sudipan Saha, Francesca Bovolo, Lorenzo Bruzzone
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2021-03-01
Volume/Issue: 59 (3): 1917-1929
Citations: 111
Identifier
DOI: 10.1109/tgrs.2020.3000296
Abstract
Building change detection (CD), important for its application in urban monitoring, can be performed in near real time by comparing prechange and postchange very-high-spatial-resolution (VHR) synthetic-aperture-radar (SAR) images. However, multitemporal VHR SAR images are complex: they show high spatial correlation, are prone to shadows, and exhibit inhomogeneous signatures. Spatial context must be taken into account to effectively detect changes in such images. Recently, convolutional-neural-network (CNN)-based transfer-learning techniques have shown strong performance for CD in VHR multispectral images. However, their direct use for SAR CD is impeded by the absence of labeled SAR data and, thus, of pretrained networks. To overcome this, we exploit the availability of paired unlabeled SAR and optical images to train for the suboptimal task of transcoding SAR images into optical images using a cycle-consistent generative adversarial network (CycleGAN). The CycleGAN consists of two generator networks: one for transcoding SAR images into the optical image domain and the other for projecting optical images into the SAR image domain. After unsupervised training, the generator that transcodes SAR images into optical ones is used as a bitemporal deep feature extractor to extract optical-like features from bitemporal SAR images. Deep change vector analysis (DCVA) and fuzzy rules can then be applied to identify changed (new/destroyed) buildings. We validate our method on two data sets, each consisting of a pair of bitemporal VHR SAR images, over the cities of L'Aquila and Trento (Italy).
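The comparison step described in the abstract can be illustrated with a minimal NumPy sketch of the deep-change-vector idea: given per-pixel deep features extracted from the two SAR acquisitions by the trained transcoding generator, compute the per-pixel magnitude of the feature difference and threshold it to obtain a change map. This is an illustrative simplification, not the paper's full method: the feature arrays, the simple mean-plus-two-sigma threshold, and all names here are hypothetical stand-ins for the multi-layer features, thresholding, and fuzzy rules used by the authors.

```python
import numpy as np

def deep_change_magnitude(feat_pre, feat_post):
    """Per-pixel magnitude of the deep change vector.

    feat_pre, feat_post: arrays of shape (C, H, W) holding C-dimensional
    deep features per pixel (assumed already extracted by the generator).
    Returns an (H, W) magnitude map.
    """
    diff = feat_post.astype(np.float64) - feat_pre.astype(np.float64)
    return np.linalg.norm(diff, axis=0)

def change_mask(magnitude, threshold):
    """Binary change map: pixels whose change magnitude exceeds the threshold."""
    return magnitude > threshold

# Toy example: 4-channel features on an 8x8 grid; a 2x2 block of pixels changes.
rng = np.random.default_rng(0)
feat_pre = rng.normal(size=(4, 8, 8))
feat_post = feat_pre.copy()
feat_post[:, 2:4, 2:4] += 5.0  # simulate a changed building footprint

mag = deep_change_magnitude(feat_pre, feat_post)
# Illustrative global threshold (the paper instead applies fuzzy rules).
mask = change_mask(mag, threshold=mag.mean() + 2 * mag.std())
```

In this toy setup only the four perturbed pixels exceed the threshold; in practice the sign and structure of the deep change vector, not just its magnitude, are used to separate new from destroyed buildings.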