Keywords
Panchromatic film, hyperspectral imaging, artificial intelligence, computer science, pattern recognition (psychology), image resolution, image fusion, fusion, multiresolution analysis, computer vision, wavelet, wavelet transform, discrete wavelet transform, image (mathematics), linguistics, philosophy
Authors
Xiaozheng Wang, Yong Yang, Shuying Huang, Weiguo Wan, Ziyang Liu, Long Zhang, Angela Zhao
Identifier
DOI:10.1109/tgrs.2025.3527568
Abstract
Hyperspectral (HS) pansharpening aims to fuse high-spatial-resolution panchromatic (PAN) images with low-spatial-resolution hyperspectral (LRHS) images to generate high-spatial-resolution hyperspectral (HRHS) images. Because they do not account for the modal feature differences between PAN and LRHS images, most deep learning-based methods suffer from spectral and spatial distortions in the fusion results. In addition, most methods use upsampled LRHS images as network input, which itself introduces spectral distortion. To address these issues, we propose a dual-stage feature correction fusion network (DFCFN) that achieves accurate fusion of PAN and LRHS images by constructing two fusion sub-networks: a feature correction compensation fusion network (FCCFN) and a multi-scale spectral correction fusion network (MSCFN). Based on the lattice filter structure, FCCFN is designed to obtain the initial fusion result by mutually correcting and supplementing the modal features from PAN and LRHS images. To suppress spectral distortion and obtain fine HRHS results, MSCFN, built on the 2D discrete wavelet transform (2D-DWT), gradually corrects the spectral features of the initial fusion result through a conditional entropy transformer (CE-Transformer). Extensive experiments on three widely used simulated datasets and one real dataset demonstrate that the proposed DFCFN achieves significant improvements in both spatial and spectral quality metrics over other state-of-the-art (SOTA) methods. Specifically, the proposed method improves the spectral angle mapper (SAM) metric by 6.4%, 6.2%, and 5.3% over the second-best comparison approach on the Pavia Center, Botswana, and Chikusei datasets, respectively. The code is available at: https://github.com/EchoPhD/DFCFN.
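For reference on the SAM figures quoted above, the following is a minimal sketch of the standard spectral angle mapper computation in NumPy. The function name and the (H, W, C) array layout are assumptions for illustration and are not taken from the DFCFN code release.

```python
import numpy as np

def spectral_angle_mapper(ref, fused, eps=1e-8):
    """Mean spectral angle (in degrees) between a reference and a fused HS cube.

    ref, fused: arrays of shape (H, W, C), where C is the number of spectral bands.
    """
    ref = ref.reshape(-1, ref.shape[-1]).astype(np.float64)
    fused = fused.reshape(-1, fused.shape[-1]).astype(np.float64)
    dot = np.sum(ref * fused, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(fused, axis=1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)   # guard against rounding outside [-1, 1]
    return np.degrees(np.arccos(cos)).mean()
```

Lower SAM values indicate less spectral distortion, so a 6.4% improvement means a 6.4% reduction in the mean spectral angle relative to the second-best method. The MSCFN stage is built on the 2D-DWT; the sketch below illustrates a per-band single-level 2D-DWT with PyWavelets on a randomly generated, purely hypothetical cube, not the authors' implementation.

```python
import numpy as np
import pywt

# Hypothetical toy cube: 64x64 pixels, 32 spectral bands of random data.
cube = np.random.rand(64, 64, 32).astype(np.float32)

# Single-level 2D-DWT applied band by band with the Haar wavelet.
coeffs = [pywt.dwt2(cube[:, :, b], "haar") for b in range(cube.shape[-1])]

# Each band yields one low-frequency sub-band (LL) and three
# high-frequency detail sub-bands (LH, HL, HH).
ll = np.stack([c[0] for c in coeffs], axis=-1)   # shape (32, 32, 32)
details = [c[1] for c in coeffs]                 # list of (LH, HL, HH) tuples
```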