Computer science
Artificial intelligence
Pattern recognition (psychology)
Hyperspectral imaging
Pooling
Autoencoder
Transformer
Embedding
Deep learning
Contextual image classification
Feature extraction
Machine learning
Image (mathematics)
Engineering
Electrical engineering
Voltage
Authors
Ziyu Li, Zhaohui Xue, Qi Xu, Ling Zhang, Tianzhi Zhu, Mengxue Zhang
Identifier
DOI: 10.1109/TGRS.2023.3345923
Abstract
The Transformer has shown great potential in extracting global features, and it can achieve better classification performance than other deep learning (DL) models when a large number of training samples is available. However, most existing Transformer-based models for hyperspectral image (HSI) classification simply use multihead self-attention and channel multilayer perceptron (MLP) modules, which contain many learnable parameters and therefore perform poorly in few-shot learning scenarios. To overcome this issue, a lightweight self-pooling Transformer (SPFormer) is proposed for few-shot HSI classification. First, a one-layer autoencoder based on self-supervised learning is built to reduce the dimensionality of the HSI. Second, two parameter-free modules, channel shuffle for multihead self-pooling with sparse mapping (CSSM-MHSP) and central token mixer (CTM), are proposed to map spectral features to higher dimensions and to promote information interaction between pixels, respectively. Third, a lightweight channel embedding is designed to extract deep spectral features. Finally, a fully connected layer is used for classification. The classification performance of the proposed method is evaluated on four benchmark datasets, demonstrating its superiority over existing state-of-the-art methods in classification accuracy, generalization performance, and model complexity with limited training samples.
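The abstract outlines a four-stage pipeline: spectral dimensionality reduction with a one-layer autoencoder, parameter-free token mixing, a lightweight channel embedding, and a fully connected classifier. The PyTorch sketch below illustrates that flow under stated assumptions only: the class names (`OneLayerAutoencoder`, `SelfPoolingMixer`, `SPFormerSketch`), the layer sizes, and the average-pooling token mixer standing in for the paper's CSSM-MHSP and CTM modules are illustrative guesses, not the authors' released implementation.

```python
# Minimal sketch of an SPFormer-like pipeline; all names and sizes are
# assumptions for illustration, not the paper's actual architecture.
import torch
import torch.nn as nn


class OneLayerAutoencoder(nn.Module):
    """Single-layer autoencoder for spectral dimensionality reduction
    (assumed to be pretrained separately with a reconstruction loss)."""

    def __init__(self, n_bands: int, n_latent: int):
        super().__init__()
        self.encoder = nn.Linear(n_bands, n_latent)
        self.decoder = nn.Linear(n_latent, n_bands)

    def forward(self, x):
        z = torch.sigmoid(self.encoder(x))       # compressed spectral code
        return self.decoder(z), z                 # (reconstruction, code)


class SelfPoolingMixer(nn.Module):
    """Parameter-free token mixing via average pooling over the token axis,
    a PoolFormer-style stand-in for the paper's CSSM-MHSP/CTM modules."""

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size, stride=1,
                                 padding=kernel_size // 2)

    def forward(self, x):                          # x: (batch, tokens, dim)
        y = self.pool(x.transpose(1, 2)).transpose(1, 2)
        return y - x                               # pooling minus identity


class SPFormerSketch(nn.Module):
    def __init__(self, n_latent: int, n_classes: int, embed_dim: int = 64):
        super().__init__()
        self.channel_embed = nn.Linear(n_latent, embed_dim)  # lightweight channel embedding
        self.norm1 = nn.LayerNorm(embed_dim)
        self.mixer = SelfPoolingMixer()
        self.norm2 = nn.LayerNorm(embed_dim)
        self.head = nn.Linear(embed_dim, n_classes)          # fully connected classifier

    def forward(self, tokens):                     # tokens: (batch, tokens, n_latent)
        x = self.channel_embed(tokens)
        x = x + self.mixer(self.norm1(x))          # residual parameter-free mixing
        x = self.norm2(x)
        return self.head(x.mean(dim=1))            # pool tokens, then classify


# Usage with hypothetical shapes: 7x7 patches (49 pixel tokens), 200 bands.
ae = OneLayerAutoencoder(n_bands=200, n_latent=30)
model = SPFormerSketch(n_latent=30, n_classes=16)
cube = torch.rand(8, 49, 200)                      # batch of 8 HSI patches
_, z = ae(cube)                                    # spectral reduction: (8, 49, 30)
logits = model(z)                                  # class scores: (8, 16)
```

The mixer here is parameter-free by construction (pooling has no weights), which matches the abstract's motivation for few-shot settings: fewer learnable parameters mean less overfitting when training samples are scarce.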