Keywords
Hyperspectral imaging; artificial intelligence; image segmentation; pattern recognition; computer science; image fusion; segmentation; computer vision; image processing; fusion; contextual image classification; image (mathematics)
Authors
Hongmin Gao,Runhua Sheng,Yuanchao Su,Zhonghao Chen,Shufang Xu,Lianru Gao
Source
Journal: IEEE Transactions on Image Processing (indexed in PubMed)
Date: 2025-09-23
Volume/Issue: PP (early access)
Identifier
DOI:10.1109/tip.2025.3611146
Abstract
Convolutional Neural Networks (CNNs) have demonstrated strong feature extraction capabilities in Euclidean spaces, achieving remarkable success in hyperspectral image (HSI) classification tasks. Meanwhile, Graph Convolutional Networks (GCNs) effectively capture spatial-contextual characteristics by leveraging correlations in non-Euclidean spaces, uncovering hidden relationships to enhance the performance of HSI classification (HSIC). Methods combining GCNs with CNNs have achieved excellent results. However, existing GCN methods primarily rely on single-scale graph structures, limiting their ability to extract features across different spatial ranges. To address this issue, this paper proposes a multiscale segmentation-guided fusion network (MS2FN) for HSIC. This method constructs pixel-level graph structures based on multiscale segmentation data, enabling the GCN to extract features across various spatial ranges. Moreover, effectively utilizing features extracted from different spatial scales is crucial for improving classification performance. This paper adopts distinct processing strategies for different feature types to enhance feature representation. Comparative experiments demonstrate that the proposed method outperforms several state-of-the-art (SOTA) approaches in accuracy. The source code will be released at https://github.com/shengrunhua/MS2FN.
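The abstract's core idea of building pixel-level graph structures from segmentations at multiple scales can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy grid segmenter stands in for a real superpixel algorithm (e.g. SLIC), and all function names here are hypothetical. Pixels that fall in the same segment are connected, and a coarser segmentation therefore yields a denser adjacency covering a wider spatial range.

```python
import numpy as np

def segment_grid(h, w, block):
    """Toy stand-in for a superpixel segmenter: label pixels by grid blocks of size `block`."""
    rows = np.arange(h) // block
    cols = np.arange(w) // block
    n_col_blocks = -(-w // block)  # ceil(w / block)
    return rows[:, None] * n_col_blocks + cols[None, :]

def segmentation_adjacency(labels):
    """Pixel-level adjacency: connect every pair of pixels sharing a segment label."""
    flat = labels.ravel()
    adj = (flat[:, None] == flat[None, :]).astype(float)
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return adj

def multiscale_adjacency(h, w, scales):
    """One adjacency matrix per segmentation scale, mirroring a multiscale graph construction."""
    return [segmentation_adjacency(segment_grid(h, w, s)) for s in scales]

adjs = multiscale_adjacency(4, 4, scales=[2, 4])
# The coarser scale (block=4) connects far more pixel pairs than the finer one (block=2).
print(int(adjs[0].sum()), int(adjs[1].sum()))  # → 48 240
```

Each adjacency would then drive a separate GCN branch, with the per-scale features fused downstream; in practice one would use sparse matrices and a learned superpixel-to-pixel mapping rather than dense pixel-pair comparisons.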