Keywords
Remote sensing, Cover (algebra), Land cover, Modal verb, Pyramid (geometry), Fusion, Computer science, Image fusion, Sensor fusion, Environmental science, Geology, Artificial intelligence, Computer vision, Cartography, Land use, Geography, Image (mathematics), Physics, Optics, Engineering, Materials science, Philosophy, Civil engineering, Mechanical engineering, Polymer chemistry, Linguistics
Authors
Qinghui Liu,Michael Kampffmeyer,Robert Jenssen,Arnt-Børre Salberg
Identifier
DOI:10.1080/01431161.2022.2098078
Abstract
Multi-modality data is becoming readily available in remote sensing (RS) and can provide complementary information about the Earth’s surface. Effective fusion of multi-modal information is thus important for various applications in RS, but also very challenging due to large domain differences, noise, and redundancies. There is a lack of effective and scalable fusion techniques for bridging multiple modality encoders and fully exploiting complementary information. To this end, we propose a new multi-modality network (MultiModNet) for land cover mapping of multi-modal remote sensing data based on a novel pyramid attention fusion (PAF) module and a gated fusion unit (GFU). The PAF module is designed to efficiently obtain rich fine-grained contextual representations from each modality with a built-in cross-level and cross-view attention fusion mechanism, and the GFU module utilizes a novel gating mechanism for early merging of features, thereby diminishing hidden redundancies and noise. This enables supplementary modalities to effectively extract the most valuable and complementary information for late feature fusion. Extensive experiments on two representative RS benchmark datasets demonstrate the effectiveness, robustness, and superiority of the MultiModNet for multi-modal land cover classification.
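The gated fusion unit (GFU) described in the abstract merges features from a primary and a supplementary modality through a gating mechanism. The paper's exact formulation is not reproduced here; the sketch below shows a generic sigmoid-gated fusion, where the gate form, the parameters `W` and `b`, and the element-wise convex mix are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(primary, supplement, W, b):
    """Merge two modality feature maps with a learned sigmoid gate.

    g = sigmoid([primary; supplement] @ W + b)   -- per-element gate in (0, 1)
    fused = g * primary + (1 - g) * supplement   -- element-wise convex mix
    """
    z = np.concatenate([primary, supplement], axis=-1)
    g = sigmoid(z @ W + b)
    return g * primary + (1.0 - g) * supplement

# Toy example: 8x8 feature maps with 4 channels from two modality
# encoders (e.g. optical and SAR); W and b are hypothetical parameters
# that a real network would learn.
C = 4
primary = rng.standard_normal((8, 8, C))
supplement = rng.standard_normal((8, 8, C))
W = rng.standard_normal((2 * C, C)) * 0.1
b = np.zeros(C)

fused = gated_fusion(primary, supplement, W, b)
```

Because the gate lies in (0, 1), each fused value is a convex combination of the two modalities, which lets the network suppress noisy or redundant channels from the supplementary modality before late fusion.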