Keywords
Hyperspectral imaging, Computer science, Remote sensing, Ranging, Artificial intelligence, LiDAR, Representation, Sensor fusion, Pattern recognition, Information retrieval, Geography, Telecommunications
Authors
Sheng Fang, Kaiyu Li, Zhe Li
Identifier
DOI:10.1109/lgrs.2021.3121028
Abstract
The effective utilization of multimodal data (e.g., hyperspectral and light detection and ranging (LiDAR) data) has profound implications for further development of the remote sensing (RS) field. Many studies have explored how to effectively fuse features from multiple modalities; however, few of them focus on information interactions that can effectively promote the complementary semantic content of multisource data before fusion. In this letter, we propose a spatial–spectral enhancement module (S²EM) for cross-modal information interaction in deep neural networks. Specifically, S²EM consists of the SpAtial Enhancement Module (SAEM), which enhances the spatial representation of hyperspectral data using LiDAR features, and the SpEctral Enhancement Module (SEEM), which enhances the spectral representation of LiDAR data using hyperspectral features. A series of experiments and ablation studies on the Houston2013 dataset show that S²EM can effectively facilitate the interaction and understanding between multimodal data. Our source code is available at https://github.com/likyoo/Multimodal-Remote-Sensing-Toolkit, contributing to the RS community.
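Below is a minimal PyTorch sketch of the cross-modal enhancement idea described in the abstract. The module names SAEM, SEEM, and S²EM come from the paper, but the internal structure shown here (a spatial attention map derived from LiDAR features that re-weights hyperspectral features, and a channel attention vector derived from hyperspectral features that re-weights LiDAR features) is an assumption for illustration only; the authors' actual implementation is in the linked repository.

# Hypothetical sketch of S2EM-style cross-modal enhancement; not the authors' code.
import torch
import torch.nn as nn


class SAEM(nn.Module):
    """SpAtial Enhancement Module (sketch): LiDAR features produce a spatial
    attention map that re-weights the hyperspectral feature map."""
    def __init__(self, lidar_channels: int):
        super().__init__()
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(lidar_channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, hsi_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        attn = self.spatial_gate(lidar_feat)   # (B, 1, H, W)
        return hsi_feat + hsi_feat * attn      # residually enhanced HSI features


class SEEM(nn.Module):
    """SpEctral Enhancement Module (sketch): hyperspectral features produce a
    channel attention vector that re-weights the LiDAR feature map."""
    def __init__(self, hsi_channels: int, lidar_channels: int, reduction: int = 4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(hsi_channels, lidar_channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(lidar_channels // reduction, lidar_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, hsi_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        attn = self.channel_gate(hsi_feat)     # (B, C_lidar, 1, 1)
        return lidar_feat + lidar_feat * attn  # residually enhanced LiDAR features


class S2EM(nn.Module):
    """Combines both directions of cross-modal interaction before fusion."""
    def __init__(self, hsi_channels: int, lidar_channels: int):
        super().__init__()
        self.saem = SAEM(lidar_channels)
        self.seem = SEEM(hsi_channels, lidar_channels)

    def forward(self, hsi_feat, lidar_feat):
        return self.saem(hsi_feat, lidar_feat), self.seem(hsi_feat, lidar_feat)


if __name__ == "__main__":
    hsi = torch.randn(2, 64, 32, 32)    # hyperspectral feature map (channel count assumed)
    lidar = torch.randn(2, 16, 32, 32)  # LiDAR feature map (channel count assumed)
    hsi_out, lidar_out = S2EM(64, 16)(hsi, lidar)
    print(hsi_out.shape, lidar_out.shape)

In this reading, each modality keeps its own feature stream and only borrows an attention signal from the other before fusion, which matches the abstract's emphasis on interaction preceding feature fusion.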