Keywords
Computer science; Multispectral image; Panchromatic; Artificial intelligence; Image fusion; Image resolution; Robustness; Computer vision; Pattern recognition; Probabilistic logic; Fusion; Image (mathematics)
Authors
Qingyan Meng, Wenxu Shi, Sijia Li, Linlin Zhang
Identifier
DOI:10.1109/tgrs.2023.3279864
Abstract
Pansharpening is a crucial image processing technique for numerous remote sensing downstream tasks, aiming to recover high spatial resolution multispectral (HRMS) images by fusing high spatial resolution panchromatic (PAN) images and low spatial resolution multispectral (LRMS) images. Most current mainstream pansharpening frameworks directly learn the mapping from PAN and LRMS images to HRMS images by extracting key features. In contrast, we propose a novel pansharpening method based on the denoising diffusion probabilistic model (DDPM), called PanDiff, which takes a new perspective and learns the data distribution of the difference maps (DM) between HRMS and interpolated MS (IMS) images. Specifically, PanDiff decomposes the complex fusion of PAN and LRMS images into a multi-step Markov process, and a U-Net is employed to reconstruct each step of the process from random Gaussian noise. Notably, the PAN and LRMS images serve as injected conditions that guide the U-Net in PanDiff, rather than being the fusion objects as in other pansharpening methods. Furthermore, we propose a modal intercalibration module (MIM) to enhance the guidance effect of the PAN and LRMS images. Experiments are conducted on freely available benchmark datasets comprising GaoFen-2, QuickBird, and WorldView-3 images. The results of the fusion and generalization tests demonstrate the outstanding fusion performance and high robustness of PanDiff. Fig. 1 depicts the results of the proposed method on various scenes. Additionally, ablation experiments confirm the rationale behind PanDiff's construction.
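The abstract's core idea can be illustrated with a minimal sketch: a DDPM whose clean signal x_0 is the difference map DM = HRMS − IMS, with the PAN and interpolated LRMS images injected as conditioning channels for the denoiser rather than fused directly. This is not the authors' implementation; the schedule values, shapes, and function names below are illustrative assumptions, and the real PanDiff denoiser is a conditional U-Net with the MIM module, not a simple channel stack.

```python
import numpy as np

def beta_schedule(T=1000, start=1e-4, end=2e-2):
    """Linear noise schedule (illustrative defaults, not PanDiff's)."""
    return np.linspace(start, end, T)

def forward_diffuse(dm, t, alphas_cumprod, noise):
    """q(x_t | x_0): noise the difference map to Markov step t."""
    a_bar = alphas_cumprod[t]
    return np.sqrt(a_bar) * dm + np.sqrt(1.0 - a_bar) * noise

def denoiser_input(x_t, pan, ims):
    """Conditioning by channel stacking: the denoiser sees the noisy DM
    together with PAN and interpolated MS as guidance, not as fusion inputs."""
    return np.concatenate([x_t, pan, ims], axis=0)

# Toy shapes: 4-band MS, single-band PAN, an 8x8 patch.
rng = np.random.default_rng(0)
hrms = rng.standard_normal((4, 8, 8))   # stand-in for the HRMS target
ims = rng.standard_normal((4, 8, 8))    # interpolated (upsampled) LRMS
pan = rng.standard_normal((1, 8, 8))
dm = hrms - ims                          # the quantity whose distribution is modeled

betas = beta_schedule()
alphas_cumprod = np.cumprod(1.0 - betas)
noise = rng.standard_normal(dm.shape)
x_t = forward_diffuse(dm, 500, alphas_cumprod, noise)
cond = denoiser_input(x_t, pan, ims)
print(cond.shape)  # (9, 8, 8): 4 DM + 1 PAN + 4 IMS channels
```

At sampling time the process runs in reverse: starting from pure Gaussian noise, the conditioned denoiser reconstructs the DM step by step, and the final HRMS estimate is recovered as IMS + DM.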