Hyperspectral imaging
Feature extraction
Convolution (computer science)
Convolutional neural network
Image fusion
Artificial intelligence
Computer science
Fusion
Remote sensing
Pattern recognition (psychology)
Contextual image classification
Computer vision
Feature (linguistics)
Image resolution
Artificial neural network
Sensor fusion
Sample (material)
Generalizability theory
Feature vector
Spatial analysis
Image segmentation
Support vector machine
Principal component analysis
Deep learning
Authors
Yiheng Zhang,Ziqiang Wang,Meng Huang,Ming Li,Jian Zhang,Shandong Wang,Jinglin Zhang,Heng Zhang
Identifier
DOI:10.1109/tgrs.2025.3608444
Abstract
Convolutional neural networks (CNNs) and Transformer-based models have achieved remarkable success in hyperspectral image (HSI) classification tasks owing to their strong ability to extract spatial and spectral features. However, most existing methods process spatial and spectral features separately, making it difficult to learn their interactions effectively. To address this issue, we propose a spectral-spatial dual-branch fusion Transformer (S2DBFT) for HSI classification. First, we construct a spectral feature extraction module (SPEEM) and a spatial feature extraction module (SPAEM) to extract low-level features; these modules consist of a one-dimensional convolution layer and a two-dimensional convolution layer, respectively, and perform shallow extraction of spectral and spatial features. The two resulting feature sets are then combined through a weighted fusion process. In addition, we design a multi-head spectral-spatial self-attention (MHS3A) mechanism to enhance the interactive fusion of spectral and spatial features. After feature fusion, a linear layer predicts the sample labels. Extensive experiments on four HSI datasets demonstrate the effectiveness of the proposed S2DBFT compared with existing state-of-the-art methods; in terms of overall accuracy and average accuracy, the results indicate the superiority and generalizability of S2DBFT.
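The abstract describes two ingredients that can be sketched generically: a weighted fusion of the two branches' feature maps, and a scaled dot-product attention step of the kind that underlies multi-head self-attention. The following is a minimal pure-Python sketch of both; the function names, the single fusion weight `w`, and the choice of feeding spectral features as queries against spatial keys/values are illustrative assumptions, not the paper's exact SPEEM/SPAEM/MHS3A formulation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(A, B):
    # Plain list-of-lists matrix product A (n x k) @ B (k x m).
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def weighted_fusion(spec, spat, w):
    # Element-wise weighted sum of spectral and spatial feature maps
    # (a single scalar weight w is an assumption for illustration).
    return [[w * s + (1.0 - w) * p for s, p in zip(rs, rp)]
            for rs, rp in zip(spec, spat)]

def scaled_dot_attention(Q, K, V):
    # softmax(Q K^T / sqrt(d)) V -- one head of dot-product attention.
    d = len(Q[0])
    scores = matmul(Q, transpose(K))
    weights = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return matmul(weights, V)

# Toy usage: spectral tokens attend to spatial tokens (hypothetical pairing).
spec_tokens = [[1.0, 0.0], [0.0, 1.0]]
spat_tokens = [[0.5, 0.5], [1.0, 0.0]]
fused = weighted_fusion(spec_tokens, spat_tokens, 0.5)
attended = scaled_dot_attention(spec_tokens, spat_tokens, spat_tokens)
```

In a real model each branch would produce multi-channel feature maps and the attention would be split across several heads with learned projections; the sketch keeps only the arithmetic skeleton of those operations.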