Keywords: Hyperspectral imaging, Scale (ratio), Computer science, Remote sensing, Image fusion, Image (mathematics), Artificial intelligence, Fusion, Computer vision, Geology, Cartography, Geography, Linguistics, Philosophy, Identification
DOI:10.1016/j.infrared.2024.105347
Abstract
Most existing fusion methods directly feed both the LR-HSI and the HR-MSI into the model for hyperspectral image fusion. However, this strategy makes it difficult to fuse the two types of images well, since the two inputs contain features that differ significantly in scale. In this paper, we propose an efficient adaptive multi-scale input network (AMSIN) for HSI fusion. The proposed AMSIN consists of an adaptive multi-scale cross-mode input module, a feature extraction & fusion module, and an image reconstruction module. In the adaptive multi-scale cross-mode input module, we design an adaptive multi-scale model that generates multiple input features at different scales to fully capture the multi-scale characteristics. Moreover, we introduce a cross-mode message insertion strategy to place the two types of input features at suitable locations. In the feature extraction & fusion module, the multi-scale features obtained from the adaptive multi-scale cross-mode input module are fused by the multi-scale spatial–spectral fusion block (MSSFB) to enhance fusion performance. The HSI with high spatial and spectral resolution is then generated by the image reconstruction module. Extensive experiments on the Pavia University, Pavia Centre and Botswana datasets demonstrate that the proposed AMSIN surpasses nine other state-of-the-art methods and performs best in both objective and subjective evaluations.
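The abstract only describes the architecture at a high level, so the following PyTorch sketch is an interpretation, not the authors' implementation: the number of scales, channel widths, the use of resampling plus feature addition for the "cross-mode message insertion", and the internal design of the MSSFB block are all illustrative assumptions. It is included only to make the data flow (multi-scale cross-mode inputs → per-scale fusion → reconstruction) concrete.

```python
# Minimal sketch of an AMSIN-style fusion network, assuming the three-module
# layout named in the abstract. All internals are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MSSFB(nn.Module):
    """Assumed multi-scale spatial-spectral fusion block: a spatial branch
    (3x3 conv) and a spectral branch (1x1 conv) summed with a residual."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        self.spectral = nn.Conv2d(channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.spatial(x) + self.spectral(x)) + x


class AMSINSketch(nn.Module):
    def __init__(self, hsi_bands=103, msi_bands=4, feats=64, num_scales=3):
        super().__init__()
        self.num_scales = num_scales
        # Separate embeddings for the two modalities at every scale.
        self.hsi_embed = nn.ModuleList(
            [nn.Conv2d(hsi_bands, feats, 3, padding=1) for _ in range(num_scales)])
        self.msi_embed = nn.ModuleList(
            [nn.Conv2d(msi_bands, feats, 3, padding=1) for _ in range(num_scales)])
        # Feature extraction & fusion: one MSSFB per scale plus a merge conv.
        self.fusion = nn.ModuleList([MSSFB(feats) for _ in range(num_scales)])
        self.merge = nn.Conv2d(feats * num_scales, feats, 1)
        # Image reconstruction back to the full spectral dimension.
        self.reconstruct = nn.Conv2d(feats, hsi_bands, 3, padding=1)

    def forward(self, lr_hsi, hr_msi):
        h, w = hr_msi.shape[-2:]
        fused_scales = []
        for s in range(self.num_scales):
            size = (max(1, h // (2 ** s)), max(1, w // (2 ** s)))
            # Adaptive multi-scale cross-mode inputs: both modalities are
            # resampled to the current scale before embedding.
            hsi_s = F.interpolate(lr_hsi, size=size, mode='bilinear',
                                  align_corners=False)
            msi_s = F.interpolate(hr_msi, size=size, mode='bilinear',
                                  align_corners=False)
            # Cross-mode message insertion approximated here as feature addition.
            feat = self.hsi_embed[s](hsi_s) + self.msi_embed[s](msi_s)
            feat = self.fusion[s](feat)
            fused_scales.append(F.interpolate(feat, size=(h, w),
                                              mode='bilinear',
                                              align_corners=False))
        fused = self.merge(torch.cat(fused_scales, dim=1))
        return self.reconstruct(fused)


if __name__ == "__main__":
    # Example shapes: a Pavia University-like HSI (103 bands) at 1/4 spatial
    # resolution and a 4-band MSI at full resolution.
    lr_hsi = torch.randn(1, 103, 40, 40)
    hr_msi = torch.randn(1, 4, 160, 160)
    out = AMSINSketch()(lr_hsi, hr_msi)
    print(out.shape)  # torch.Size([1, 103, 160, 160])
```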