Keywords: Computer science; Normalization (sociology); Artificial intelligence; Feature extraction; Convolution (computer science); Pattern recognition (psychology); Feature (linguistics); Computer vision; Spatial analysis; Remote sensing; Artificial neural network; Geography; Anthropology; Philosophy; Linguistics; Sociology
Authors
Chenlu Hu, Mengting Ma, Xiaowen Ma, Huanting Zhang, Dun Wu, Guang R. Gao, Wei Zhang
Identifier
DOI:10.1109/icip49359.2023.10222878
Abstract
Spatiotemporal fusion aims to generate remote sensing images with both high spatial and high temporal resolution. Conventional spatiotemporal fusion methods usually rely on convolution operations for feature extraction, which limits their ability to capture long-range dependencies. Meanwhile, the large gap in spatial resolution between the input images makes it difficult to reconstruct detailed textures. To address these issues, we propose a GAN-based multi-stage spatiotemporal adaptive network (STANet) for remote sensing images, built on temporal feature refinement and spatial texture transfer. In particular, we design a temporal interaction module (TIM) that extracts useful information about surface changes over time through a cross-temporal gating mechanism that emphasizes feature changes throughout the task. We employ adaptive instance normalization (AdaIN) layers to learn global spatial correlation via texture transfer from the fine image to the coarse image. Experiments on two datasets show that the proposed method outperforms other state-of-the-art methods on several metrics.
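The AdaIN operation the abstract refers to aligns the channel-wise statistics of one feature map with those of another, which is how texture can be transferred from the fine image's features to the coarse image's features. The sketch below is a minimal NumPy illustration of standard AdaIN (Huang & Belongie), not the authors' implementation; the function name and array shapes are assumptions for illustration.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization (illustrative sketch).

    content, style: feature maps of shape (C, H, W), e.g. coarse-image
    and fine-image features respectively. The content features are
    normalized per channel, then rescaled and shifted to match the
    per-channel mean and std of the style features.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Normalize content statistics, then adopt style statistics.
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```

After this operation, each channel of the output carries the spatial layout of the content features but the first- and second-order statistics (and hence much of the texture character) of the style features.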