Computer science
Artificial intelligence
Ground truth
Pattern recognition (psychology)
Change detection
Feature extraction
Graph
Image translation
Feature learning
Autoencoder
Deep learning
Pixel
Object detection
Computer vision
Image (mathematics)
Theoretical computer science
Authors
Meng Jia,Cheng Zhang,Zhiqiang Zhao,Lei Wang
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/Issue: 60: 1-15
Citations: 2
Identifier
DOI:10.1109/tgrs.2022.3190504
Abstract
Detecting land cover change is an essential task in very-high-spatial-resolution (VHR) remote sensing applications. However, because VHR images capture the details of ground objects, their scenes are usually complex. For example, VHR images often show distinct appearances or features for the same object, caused by noise, climate conditions, imaging angles, etc. To address this issue, this paper proposes a novel unsupervised approach named bipartite graph attention autoencoders (BGAAE) for VHR image change detection. BGAAE improves on dual convolutional autoencoders built on the image-translation architecture by equipping the encoder layers with a graph attention mechanism (GAM). To generate an effective difference image, the method adds two loss terms to the reconstruction loss: a domain correlation loss and a semantic consistency loss. The domain correlation loss is defined on the encoder layers and enforces spatial alignment of the deep feature representations of unchanged objects, mitigating the influence of pixel changes on the learning objective. The semantic consistency loss ensures that the semantic features of the bitemporal images remain consistent after transcoding, allowing for more flexible transformations. Experimental results on four VHR image datasets demonstrate the superiority of the proposed method.
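The abstract describes a three-term training objective: a reconstruction loss plus a domain correlation loss (aligning deep features of unchanged pixels across the two dates) and a semantic consistency loss (keeping features consistent after transcoding). The sketch below illustrates how such an objective could be assembled; the loss weights, the MSE/cosine distance choices, and all function names are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    # Assumed mean-squared-error between an image and its reconstruction.
    return float(np.mean((x - x_hat) ** 2))

def domain_correlation_loss(f1, f2, unchanged_mask):
    # Align deep features of the two dates only where pixels are believed
    # unchanged; the mask downweights changed pixels so they do not
    # dominate the learning objective (an assumed masking scheme).
    diff = np.mean((f1 - f2) ** 2, axis=0)  # per-pixel feature distance
    return float(np.sum(unchanged_mask * diff) / (np.sum(unchanged_mask) + 1e-8))

def semantic_consistency_loss(f_trans, f_target):
    # Keep semantic features consistent after transcoding one temporal
    # domain into the other; cosine distance is an assumed choice here.
    num = np.sum(f_trans * f_target, axis=0)
    den = np.linalg.norm(f_trans, axis=0) * np.linalg.norm(f_target, axis=0) + 1e-8
    return float(np.mean(1.0 - num / den))

def total_loss(x, x_hat, f1, f2, f_trans, f_target, mask,
               lambda_dc=1.0, lambda_sc=1.0):
    # Weighted sum of the three terms; the lambdas are hypothetical.
    return (reconstruction_loss(x, x_hat)
            + lambda_dc * domain_correlation_loss(f1, f2, mask)
            + lambda_sc * semantic_consistency_loss(f_trans, f_target))
```

With perfectly reconstructed images and identical bitemporal features, all three terms vanish, so the objective only penalizes misalignment and transcoding drift.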