Keywords
Hyperspectral imaging; convolutional neural network; transformer; encoder; endmember; autoencoder; deep learning; pattern recognition; artificial intelligence
Authors
Preetam Ghosh,Swalpa Kumar Roy,Bikram Koirala,Behnood Rasti,Paul Scheunders
Identifier
DOI:10.1109/tgrs.2022.3196057
Abstract
Transformers have intrigued the vision research community with their state-of-the-art performance in natural language processing. With their superior performance, transformers have found their way into the field of hyperspectral image classification and achieved promising results. In this article, we harness the power of transformers for the task of hyperspectral unmixing and propose a novel deep neural network-based unmixing model with transformers. A transformer network captures nonlocal feature dependencies through interactions between image patches, which are not employed in CNN models, and thereby has the ability to enhance the quality of the endmember spectra and the abundance maps. The proposed model is a combination of a convolutional autoencoder and a transformer. The hyperspectral data are encoded by the convolutional encoder. The transformer captures long-range dependencies between the representations derived from the encoder. The data are reconstructed using a convolutional decoder. We applied the proposed unmixing model to three widely used unmixing datasets, i.e., Samson, Apex, and Washington DC Mall, and compared it with state-of-the-art methods in terms of root mean squared error (RMSE) and spectral angle distance (SAD). The source code for the proposed model will be made publicly available at https://github.com/preetam22n/DeepTrans-HSU.
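The pipeline described in the abstract (convolutional encoder → transformer over spatial positions → convolutional decoder, with sum-to-one abundance maps) can be sketched as follows. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation (see the linked DeepTrans-HSU repository for that): the layer widths, patch handling, and the choice of a bias-free 1×1 convolution as the linear-mixing decoder (whose weights play the role of the endmember spectra) are all assumptions here.

```python
# Minimal sketch of an autoencoder + transformer unmixing model.
# Hyperparameters (dim, heads, layers) are illustrative assumptions,
# not the values used in the paper.
import torch
import torch.nn as nn

class TransformerUnmixing(nn.Module):
    def __init__(self, n_bands=156, n_endmembers=3, dim=64):
        super().__init__()
        # Convolutional encoder: spectral bands -> low-dim feature maps
        self.encoder = nn.Sequential(
            nn.Conv2d(n_bands, 128, 3, padding=1), nn.LeakyReLU(),
            nn.Conv2d(128, dim, 1),
        )
        # Transformer over the sequence of spatial positions, capturing
        # the non-local dependencies that convolutions alone miss
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Abundance head: softmax over endmembers enforces sum-to-one
        self.abundance = nn.Sequential(nn.Conv2d(dim, n_endmembers, 1),
                                       nn.Softmax(dim=1))
        # Linear-mixing "decoder": 1x1 conv whose weight matrix acts as
        # the learned endmember spectra (an assumption of this sketch)
        self.decoder = nn.Conv2d(n_endmembers, n_bands, 1, bias=False)

    def forward(self, x):                    # x: (B, n_bands, H, W)
        f = self.encoder(x)                  # (B, dim, H, W)
        b, d, h, w = f.shape
        seq = f.flatten(2).transpose(1, 2)   # (B, H*W, dim) token sequence
        seq = self.transformer(seq)          # long-range interactions
        f = seq.transpose(1, 2).reshape(b, d, h, w)
        a = self.abundance(f)                # abundance maps
        recon = self.decoder(a)              # reconstructed spectra
        return recon, a
```

Such a model would typically be trained with a reconstruction loss (e.g., MSE or a spectral-angle term) between `recon` and the input cube; the endmember spectra are then read off the decoder weights and the abundance maps from `a`.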