Hyperspectral imaging
Iterative reconstruction
Computer vision
Full-spectrum imaging
Artificial intelligence
Computer science
Remote sensing
Geology
Source
Journal: IEEE Transactions on Computational Imaging
Date: 2025-01-01
Volume/Issue: 11: 625-637
Citations: 5
Identifiers
DOI: 10.1109/tci.2025.3564776
Abstract
The deep unfolding framework has achieved remarkable progress in hyperspectral image (HSI) reconstruction by consolidating imaging model-driven and data-driven approaches, generally realized through a data reconstruction error term and a prior learning network. However, current methods still suffer from insufficient generalization and representation for high-dimensional HSI data, manifesting in two key aspects: 1) the assumption of a fixed sensing mask, which yields low generalization when reconstructing compressive measurements out of distribution; and 2) an imperfect prior representation network for the high-dimensional data in both the spatial and spectral domains. To overcome these issues, this study presents a highly generalized deep unfolding model that uses a coupled spatial-spectral transformer (CS2Tr) for prior learning. Specifically, to improve generalization, we synthesize training samples with diverse masks to learn the unfolding model, and propose a mask-guided data modeling module that is incorporated into both the data reconstruction term and the prior learning network for degradation-aware updating and representation context modeling. To achieve robust prior representation, a coupled spatial-spectral transformer that models both nonlocal spatial and spectral dependencies is introduced to capture the 3D attributes of HSI. Moreover, we conduct feature interaction among stages to capture rich and diverse contexts, and apply auxiliary losses at all stages to enhance the recovery capability of each individual step. Extensive experiments on both simulated and real scenes demonstrate that the proposed method outperforms state-of-the-art HSI reconstruction approaches.
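The abstract's core structure, alternating a data-fidelity update against the sensing mask with a learned prior step, can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's method: the sensing operator is simplified to an elementwise masked sum over spectral bands, and a hand-written smoothing filter stands in for the CS2Tr prior network.

```python
import numpy as np

def forward_op(x, mask):
    """Sensing operator Phi: masked sum over spectral bands -> 2D measurement."""
    return np.sum(mask * x, axis=0)

def adjoint_op(y, mask):
    """Adjoint Phi^T: broadcast the 2D measurement back through the mask."""
    return mask * y[None, :, :]

def prior_step(x):
    """Placeholder for the learned prior (CS2Tr in the paper); a simple
    4-neighbor smoothing stands in for the network here."""
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    neighbors = (pad[:, :-2, 1:-1] + pad[:, 2:, 1:-1] +
                 pad[:, 1:-1, :-2] + pad[:, 1:-1, 2:]) / 4.0
    return 0.25 * x + 0.75 * neighbors

def unfolding_stage(x, y, mask, eta=0.3):
    """One unfolding stage: gradient step on ||Phi x - y||^2, then prior."""
    residual = forward_op(x, mask) - y            # Phi x - y
    x = x - eta * adjoint_op(residual, mask)      # degradation-aware update
    return prior_step(x)

rng = np.random.default_rng(0)
bands, h, w = 4, 8, 8
x_true = rng.random((bands, h, w))
mask = (rng.random((bands, h, w)) > 0.5).astype(float)
y = forward_op(x_true, mask)

x = adjoint_op(y, mask)           # initialize from the measurement
for _ in range(10):               # a fixed number of unfolded stages
    x = unfolding_stage(x, y, mask)
```

In the actual model, `prior_step` would be a trained transformer shared or varied per stage, and the mask would additionally condition the prior network, which is what the mask-guided data modeling module in the abstract refers to.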