Hyperspectral imaging
LiDAR
Remote sensing
Sensor fusion
Feature
Computer science
Fusion
Artificial intelligence
Feature extraction
Pattern recognition
Geology
Linguistics
Philosophy
Authors
Changzhe Jiao, Lei Wang, Chao Hu, Xu Tang, Hao Zhu, Licheng Jiao
Identifier
DOI:10.1109/tgrs.2025.3603835
Abstract
With the advancement of multi-modal technology, the combination of hyperspectral image (HSI) and light detection and ranging (LiDAR) data has become a prominent research topic in land use and land cover classification. Complex correlations exist between HSI and LiDAR data. However, many studies focus on the spectral, spatial, and other attribute features of multi-modal data while ignoring these important relationships, thereby limiting classification performance. To address these issues, we propose a dynamic common and unique feature fusion network (DCU-Net) that establishes the dependency relationship between HSI and LiDAR and mines their shared and complementary information. A multi-scale attribute feature extraction block captures the spectral-spatial information of HSI and the spatial-elevation information of LiDAR data, effectively reducing the effect of scale differences among objects. In addition, we introduce a novel common-unique transformer block that uses cross dynamic-agent attention to extract features common to the HSI and LiDAR data, alongside depth-wise convolution modules that focus on their unique features. By associating the common and unique features of HSI and LiDAR with their dependencies, the robustness and classification accuracy of the model are significantly improved. In the fusion stage, common and unique features are adaptively reconstructed into discriminative features containing high-level semantic information. Extensive experiments on four popular HSI and LiDAR datasets demonstrate the superiority and effectiveness of the proposed model, showcasing its great potential for multi-modal remote sensing data analysis. The source code of the proposed method is available publicly at https://github.com/wanglei1588/Comon_Unique.
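The abstract does not specify the exact architecture of the cross dynamic-agent-attention block, but the general agent-attention idea it names can be illustrated: a small set of learned agent tokens first summarizes one modality, and tokens of the other modality then attend to that compact summary, which is far cheaper than full token-to-token cross attention. The following is a minimal NumPy sketch under that assumption; all function names, shapes, and the single-head, unprojected formulation are hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_agent_attention(q_hsi, kv_lidar, agents):
    """Cross-modal attention mediated by a few agent tokens (illustrative).

    q_hsi   : (n_q, d)  tokens from the HSI branch (queries)
    kv_lidar: (n_kv, d) tokens from the LiDAR branch (keys/values)
    agents  : (n_a, d)  learned agent tokens, n_a << n_q, n_kv
    Returns : (n_q, d)  HSI tokens enriched with LiDAR information
    """
    d = q_hsi.shape[-1]
    # Step 1: agents act as queries and aggregate the LiDAR tokens
    # into n_a summary vectors; cost O(n_a * n_kv) instead of O(n_q * n_kv).
    agg = softmax(agents @ kv_lidar.T / np.sqrt(d)) @ kv_lidar   # (n_a, d)
    # Step 2: HSI tokens attend to the small agent summary; cost O(n_q * n_a).
    return softmax(q_hsi @ agg.T / np.sqrt(d)) @ agg             # (n_q, d)

# Tiny usage example with random tokens.
rng = np.random.default_rng(0)
q  = rng.standard_normal((6, 8))    # 6 HSI tokens, dim 8
kv = rng.standard_normal((10, 8))   # 10 LiDAR tokens
ag = rng.standard_normal((3, 8))    # 3 agent tokens
out = cross_agent_attention(q, kv, ag)  # shape (6, 8)
```

In a trained network the agent tokens (and the usual query/key/value projections, omitted here) would be learned parameters; the sketch only shows the two-stage attention routing that gives agent attention its reduced complexity.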