Keywords
Computer science, Image fusion, Robustness, Artificial intelligence, Fusion, Aerial imagery, Computer vision, Artificial neural network, Convolutional neural network, Pattern recognition, Image
Authors
Dajiang Lei,Menghao Bai,Liping Zhang,Weisheng Li
Identifier
DOI:10.1080/01431161.2022.2030070
Abstract
Spatiotemporal fusion technology provides a feasible and economical solution for generating remote sensing images with high spatiotemporal resolution. Recently proposed learning-based methods achieve high accuracy; however, their network structures are relatively simple and cannot extract deep features from the input images, so the fused images fail to recover fine landform details and their quality suffers. Moreover, most methods use a single pixel-level mean squared error (MSE) loss, which makes it difficult to recover high-frequency details and reduces fusion accuracy. In this paper, we propose an edge structure loss that is added to a spatiotemporal fusion network without a pre-trained model. To fully extract the spectral information and spatial details of the image, we propose a DenseNet-BC module for image fusion tasks, which allows features to be transmitted more easily throughout the network. This improvement also gives the network better generalizability and robustness in spatiotemporal fusion. In addition, we propose an edge loss to further improve the accuracy of the fusion results. Experiments comparing our method with existing spatiotemporal fusion algorithms in different regions show that it is more fault tolerant, achieves higher accuracy on quality evaluation indicators, and produces better visual effects.
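The abstract does not give the exact form of the edge structure loss, so the following is only a minimal PyTorch sketch of one common way to combine a pixel-level MSE term with an edge term based on Sobel gradient magnitudes. The functions `fusion_loss` and `_edge_map` and the `edge_weight` hyperparameter are illustrative assumptions, not the authors' published formulation.

```python
import torch
import torch.nn.functional as F

# Sobel kernels used to estimate horizontal and vertical image gradients.
_SOBEL_X = torch.tensor([[-1., 0., 1.],
                         [-2., 0., 2.],
                         [-1., 0., 1.]])
_SOBEL_Y = _SOBEL_X.t()

def _edge_map(img):
    """Per-channel Sobel gradient magnitude of an (N, C, H, W) tensor."""
    c = img.shape[1]
    kx = _SOBEL_X.to(img).repeat(c, 1, 1, 1)   # (C, 1, 3, 3) depthwise kernels
    ky = _SOBEL_Y.to(img).repeat(c, 1, 1, 1)
    gx = F.conv2d(img, kx, padding=1, groups=c)
    gy = F.conv2d(img, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def fusion_loss(pred, target, edge_weight=0.1):
    """Pixel-level MSE plus a weighted edge-structure term (weight is assumed)."""
    pixel_loss = F.mse_loss(pred, target)
    edge_loss = F.l1_loss(_edge_map(pred), _edge_map(target))
    return pixel_loss + edge_weight * edge_loss
```

For the DenseNet-BC module, the conventional DenseNet-BC design is a 1x1 bottleneck convolution followed by a 3x3 convolution, with each layer's output concatenated to its input so that features keep flowing through the whole network. The growth rate, bottleneck width, and number of layers below are assumed values for illustration only.

```python
import torch
import torch.nn as nn

class DenseLayerBC(nn.Module):
    """One DenseNet-BC layer: BN-ReLU-1x1 bottleneck, then BN-ReLU-3x3 conv;
    the new feature maps are concatenated with the layer input."""
    def __init__(self, in_channels, growth_rate=16, bottleneck=4):
        super().__init__()
        inter = bottleneck * growth_rate
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, inter, kernel_size=1, bias=False),
            nn.BatchNorm2d(inter), nn.ReLU(inplace=True),
            nn.Conv2d(inter, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return torch.cat([x, self.body(x)], dim=1)

class DenseBlockBC(nn.Sequential):
    """Stack of dense layers; channel count grows by growth_rate per layer."""
    def __init__(self, in_channels, num_layers=4, growth_rate=16):
        layers = [DenseLayerBC(in_channels + i * growth_rate, growth_rate)
                  for i in range(num_layers)]
        super().__init__(*layers)
```

With in_channels=32, growth_rate=16, and 4 layers, the block's output has 32 + 4 x 16 = 96 channels; this dense concatenation is what makes features "more easily transmittable" through the network, as the abstract describes.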