Computer science
Artificial intelligence
Representation (politics)
Feature (linguistics)
Context (archaeology)
Bilinear interpolation
Pattern recognition (psychology)
Hyperspectral imaging
Sensor fusion
Feature extraction
Remote sensing
Computer vision
Paleontology
Linguistics
Political science
Law
Biology
Geology
Philosophy
Politics
Authors
Xue Song,Lingling Li,Licheng Jiao,Fang Liu,Xu Liu,Shuyuan Yang
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Pages: 61: 1-17
Identifier
DOI:10.1109/tgrs.2023.3336771
Abstract
Fusing the complementary and heterogeneous properties of multimodal data (such as hyperspectral, lidar, and synthetic aperture radar data) can significantly improve the accuracy of joint classification of remote sensing (RS) images. Thus, we propose a spatial-spectral bilinear representation fusion network (S²BRFNet), which captures long-range dependencies across modalities and within a single modality to achieve the final joint classification. First, a cross-modal spatial-spectral representation module (S²RM) is designed; it uses spatial-spectral attention and self-attention between heterogeneous data to enhance the characterization of cross-modal complementary properties and of the spatial-spectral features of single-source data. Second, a semantic space-guided bilinear feature fusion module (S²BFM) is developed, which combines deep and shallow features to recover fine-grained features: shallow location details improve the semantic prediction of deep features, and the differing representational capabilities of different layers are exploited for objects with obvious feature differences, yielding rich global context information. Finally, a semantic space re-weighting strategy guides the outer-product fusion of heterogeneous features, which enhances the network's ability to discriminate similar features. Classification experiments on four common datasets with different modality combinations (HS-SAR-DSM Augsburg, Berlin, Trento, and Muufl) demonstrate the superiority of S²BRFNet.
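The core fusion step the abstract describes is bilinear (outer-product) fusion of heterogeneous features. The following is a minimal illustrative sketch of plain outer-product fusion of two modality feature vectors, not the authors' S²BRFNet (it omits the semantic space re-weighting and attention modules); all names and shapes are assumptions, and the signed square-root plus L2 normalization is a common stabilizer for bilinear features, not something stated in the abstract.

```python
import numpy as np

def bilinear_fusion(f_a, f_b):
    """Outer-product (bilinear) fusion of two modality feature vectors.

    Illustrative sketch only: f_a and f_b stand for 1-D feature vectors
    from two branches (e.g. a hyperspectral branch and a SAR branch).
    """
    # The outer product captures all pairwise interactions between
    # features of the two modalities.
    interactions = np.outer(f_a, f_b)      # shape (len(f_a), len(f_b))
    fused = interactions.flatten()
    # Signed square-root + L2 normalization (a common stabilizer
    # for bilinear features; an assumption here, not from the paper).
    fused = np.sign(fused) * np.sqrt(np.abs(fused))
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else fused

# Toy example with made-up features from two modalities.
hs = np.array([0.5, -1.0, 2.0])   # hypothetical hyperspectral features
sar = np.array([1.5, 0.25])       # hypothetical SAR features
fused = bilinear_fusion(hs, sar)
print(fused.shape)  # (6,)
```

The fused vector grows as the product of the two feature dimensions, which is why practical bilinear models typically follow this step with a projection or compact approximation.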