Hyperspectral imaging
Computer science
Artificial intelligence
Pattern recognition (psychology)
Convolutional neural network
Artificial neural network
Contextual image classification
Novelty detection
Preprocessor
Transformer
Feature extraction
Convolution (computer science)
Deep learning
Process (computing)
Image processing
Kernel (algebra)
Data processing
Data modeling
Spectral signature
Network architecture
Dimension (graph theory)
Machine learning
Image resolution
Data mining
Dimensionality reduction
Signal processing
Authors
Felipe Viel, Renato Cotrim Maciel, Laio Oriel Seman, Cesar Albenes Zeferino, Eduardo Augusto Bezerra, Valderi Reis Quietinho Leithardt
Source
Journal: IEEE Access
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Pages: 11: 24835-24850
Cited by: 43
Identifier
DOI: 10.1109/access.2023.3255164
Abstract
Hyperspectral images contain tens to hundreds of bands, implying a high spectral resolution. This high spectral resolution allows for obtaining a precise signature of the structures and compounds that make up the captured scene. Among the types of processing that may be applied to hyperspectral images, classification using machine learning models stands out. The classification process is one of the most relevant steps for this type of image; it can extract information using spatial information, spectral information, or spatial-spectral fusion. Artificial Neural Network models have been gaining prominence among existing classification techniques and can be applied to data with one, two, or three dimensions. Given the above, this work evaluates Convolutional Neural Network models with one, two, and three dimensions to identify the impact of classifying hyperspectral images with different types of convolution. We also expand the comparison to Recurrent Neural Network models, the Attention Mechanism, and the Transformer architecture. Furthermore, a novel pre-processing method is proposed for the classification process to avoid data leakage between training, validation, and testing data. The results demonstrated that using the one-dimensional Convolutional Neural Network (1D-CNN), Long Short-Term Memory (LSTM), and Transformer architectures reduces memory consumption and per-sample processing time while maintaining satisfactory classification performance, up to 99% accuracy on the larger datasets. In addition, the Transformer architecture can approach the 2D-CNN and 3D-CNN architectures in accuracy using only spectral information. The results also show that using two- or three-dimensional convolution layers improves accuracy at the cost of greater memory consumption and processing time per sample. Furthermore, the pre-processing methodology guarantees the disassociation of training and testing data.
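To make the 1D setting concrete, the sketch below shows a per-pixel spectral classifier of the 1D-CNN kind compared in the abstract, written in PyTorch. The layer sizes, kernel widths, number of bands (200), and number of classes (16) are illustrative assumptions, not the configuration evaluated by the authors; the point is only that a 1D convolution slides along the spectral axis of a single pixel's signature instead of over spatial neighborhoods.

```python
# Minimal per-pixel 1D-CNN sketch (PyTorch). Hyperparameters are illustrative
# assumptions, not the architecture reported in the paper.
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    """Classifies a single pixel from its spectral signature alone."""
    def __init__(self, n_bands: int = 200, n_classes: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3),  # convolution slides along the band axis
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # collapse the remaining spectral dimension
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (batch, n_bands) -- one reflectance spectrum per pixel
        x = x.unsqueeze(1)                               # -> (batch, 1, n_bands)
        feats = self.features(x).squeeze(-1)             # -> (batch, 64)
        return self.classifier(feats)                    # -> (batch, n_classes) logits

model = Spectral1DCNN()
spectra = torch.randn(8, 200)   # 8 dummy pixel spectra with 200 bands each
logits = model(spectra)         # -> shape (8, 16)
```

Whatever model is used, the pixels assigned to the training, validation, and test sets must remain disjoint; this is the property the pre-processing method proposed in the paper is designed to guarantee.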