Keywords
Multispectral image
Hyperspectral imaging
Artificial intelligence
Tucker decomposition
Image fusion
Pattern recognition
Image resolution
Computer vision
Feature
Computer science
Image
Remote sensing
Tensor decomposition
Tensor
Authors
He Wang,Yang Xu,Zebin Wu,Zhihui Wei
Identifier
DOI:10.1109/tnnls.2024.3457781
Abstract
Hyperspectral image (HSI) and multispectral image (MSI) fusion aims to generate a hyperspectral image with both high spectral and high spatial resolution (HR-HSI) by fusing a high-resolution multispectral image (HR-MSI) with a low-resolution hyperspectral image (LR-HSI). However, existing fusion methods face challenges such as unknown degradation parameters and incomplete exploitation of the correlation between high-dimensional structures and deep image features. To overcome these issues, this article proposes an unsupervised blind fusion method for LR-HSI and HR-MSI based on deep Tucker decomposition and spatial-spectral manifold learning (DTDNML). We design a novel deep Tucker decomposition network that maps the LR-HSI and HR-MSI into a consistent feature space and achieves reconstruction through decoders with shared parameters. To better exploit and fuse the spatial-spectral features in the data, we design a core tensor fusion network (CTFN) that incorporates a spatial-spectral attention mechanism to align and fuse features at different scales. Furthermore, to enhance the capacity to capture global information, a Laplacian-based spatial-spectral manifold constraint is introduced into the shared decoders. Extensive experiments validate that this method improves the accuracy and efficiency of hyperspectral and multispectral fusion on different remote sensing datasets. The source code is available at https://github.com/Shawn-H-Wang/DTDNML.
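As background for the abstract, the Tucker model it builds on represents a hyperspectral cube as a small core tensor multiplied along each mode by factor matrices. The sketch below is a minimal NumPy illustration of that reconstruction step only, with hypothetical sizes and names; it is not the paper's DTDNML network.

```python
import numpy as np

def tucker_reconstruct(core, U_h, U_w, U_s):
    """Reconstruct X = G x1 U_h x2 U_w x3 U_s via mode-n products.

    core: (r1, r2, r3) low-dimensional core tensor G
    U_h:  (H, r1) spatial (height) factor matrix
    U_w:  (W, r2) spatial (width) factor matrix
    U_s:  (S, r3) spectral factor matrix
    Returns the full (H, W, S) tensor.
    """
    return np.einsum('abc,ia,jb,kc->ijk', core, U_h, U_w, U_s)

# Hypothetical toy sizes: a 32x32 scene with 100 spectral bands.
rng = np.random.default_rng(0)
core = rng.standard_normal((4, 4, 6))   # compact core tensor
U_h = rng.standard_normal((32, 4))      # height factor
U_w = rng.standard_normal((32, 4))      # width factor
U_s = rng.standard_normal((100, 6))     # spectral factor

hr_hsi = tucker_reconstruct(core, U_h, U_w, U_s)
print(hr_hsi.shape)  # (32, 32, 100)
```

In fusion methods of this family, the LR-HSI and HR-MSI constrain different factors (spatially downsampled `U_h`, `U_w` versus spectrally downsampled `U_s`), while a fused core tensor ties the two observations together; DTDNML learns these components with deep networks rather than the fixed linear factors shown here.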