Hyperspectral imaging
Artificial intelligence
Computer science
Pixel
Pattern recognition (psychology)
Feature extraction
Convolutional neural network
Spectral band
Transformer
Remote sensing
Physics
Geology
Quantum mechanics
Voltage
Authors
Diling Liao, Cuiping Shi, Liguo Wang
Identifier
DOI:10.1109/tgrs.2023.3286950
Abstract
In the past, deep learning (DL) technologies have been widely used in hyperspectral image classification tasks. Among them, convolutional neural networks (CNNs), which use fixed-size receptive fields (RFs) to extract the spectral and spatial features of hyperspectral images (HSIs), have shown strong feature extraction capability and are one of the most popular DL frameworks. However, convolution, with its local extraction and global parameter-sharing mechanism, pays more attention to spatial content information, which alters the spectral sequence information in the learned features. In addition, it is difficult for CNNs to describe the long-distance correlations between HSI pixels and between bands. To solve these problems, a spectral-spatial fusion Transformer network (S²FTNet) is proposed for the classification of hyperspectral images. Specifically, S²FTNet adopts the Transformer framework to build a spatial Transformer module (SpaFormer) and a spectral Transformer module (SpeFormer) that capture spatial and spectral long-distance dependencies in the image. In addition, an adaptive spectral-spatial fusion mechanism (AS²FM) is proposed to effectively fuse the obtained high-level semantic features. Finally, extensive experiments were carried out on four datasets, Indian Pines, Pavia, Salinas, and WHU-Hi-LongKou, which verified that the proposed S²FTNet provides better classification performance than other state-of-the-art networks.
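The paper itself does not publish implementation details here, but the abstract's description of a dual-branch design (SpaFormer over pixel tokens, SpeFormer over band tokens, fused by AS²FM) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration only: the module names `TransformerBranch`, `S2FTNetSketch`, the embedding dimensions, the gated fusion rule, and the patch/band sizes are hypothetical and not taken from the authors' code.

```python
import torch
import torch.nn as nn

class TransformerBranch(nn.Module):
    """Generic Transformer encoder branch (stand-in for SpaFormer / SpeFormer)."""
    def __init__(self, num_tokens, dim=64, depth=2, heads=4):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, num_tokens, dim))  # learnable positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=dim * 2,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens):              # tokens: (B, N, dim)
        return self.encoder(tokens + self.pos)

class S2FTNetSketch(nn.Module):
    """Hypothetical dual-branch spectral-spatial Transformer with gated fusion
    (illustrative sketch, not the authors' S²FTNet implementation)."""
    def __init__(self, bands, patch=7, dim=64, num_classes=16):
        super().__init__()
        # Spatial branch: each pixel of the patch is a token of length `bands`.
        self.spa_embed = nn.Linear(bands, dim)
        self.spa_former = TransformerBranch(num_tokens=patch * patch, dim=dim)
        # Spectral branch: each band is a token of length patch*patch.
        self.spe_embed = nn.Linear(patch * patch, dim)
        self.spe_former = TransformerBranch(num_tokens=bands, dim=dim)
        # Adaptive fusion: a learnable gate weights the two pooled features.
        self.gate = nn.Sequential(nn.Linear(dim * 2, dim), nn.Sigmoid())
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                   # x: (B, bands, patch, patch)
        spa = x.flatten(2).transpose(1, 2)  # (B, patch*patch, bands): pixels as tokens
        spa = self.spa_former(self.spa_embed(spa)).mean(dim=1)
        spe = x.flatten(2)                  # (B, bands, patch*patch): bands as tokens
        spe = self.spe_former(self.spe_embed(spe)).mean(dim=1)
        g = self.gate(torch.cat([spa, spe], dim=-1))
        fused = g * spa + (1 - g) * spe     # adaptive spectral-spatial fusion
        return self.head(fused)

# Example: a 7x7 patch with 200 bands (e.g. Indian Pines) and 16 land-cover classes.
logits = S2FTNetSketch(bands=200)(torch.randn(2, 200, 7, 7))
print(logits.shape)                         # torch.Size([2, 16])
```

The key design point the abstract emphasizes is that both branches use self-attention, so each token (pixel or band) can attend to every other token, capturing the long-distance dependencies that fixed-size convolutional receptive fields miss; the gate then learns how much spatial versus spectral evidence to trust per sample.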