Computer Science
Sensor Fusion
Remote Sensing
Fusion
Artificial Intelligence
Geology
Linguistics
Philosophy
Authors
Yuxing Chen, Maofan Zhao, Lorenzo Bruzzone
Identifiers
DOI: 10.1109/TGRS.2024.3387837
Abstract
The mechanism of connecting multimodal signals through self-attention operations is a key factor in the success of multimodal Transformer networks in remote sensing data fusion tasks. However, traditional approaches assume access to all modalities during both training and inference, which can lead to severe performance degradation when dealing with modality-incomplete inputs in downstream applications. To address this limitation, we propose a novel approach to incomplete multimodal learning in the context of remote sensing data fusion and the multimodal Transformer. This approach can be used in both supervised and self-supervised pretraining paradigms. It leverages additional learned fusion tokens in combination with modality attention and masked self-attention mechanisms to collect multimodal signals in a multimodal Transformer. The proposed approach employs reconstruction and contrastive loss to facilitate fusion in pretraining, while allowing for random modality combinations as inputs during network training. Experimental results show that the proposed method delivers state-of-the-art performance on two multimodal datasets for tasks such as building instance/semantic segmentation and land-cover mapping when dealing with incomplete inputs during inference.
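The abstract describes learned fusion tokens combined with masked self-attention so that the network accepts arbitrary modality subsets at both training and inference time. As a rough illustration of how such a mechanism can be wired, here is a minimal PyTorch sketch; the class name, the modality names ("sar", "optical"), the token counts, and the use of a key padding mask to hide absent modalities are all illustrative assumptions, not the architecture from the paper (which additionally uses a modality attention mechanism not sketched here).

```python
import torch
import torch.nn as nn

class IncompleteFusionEncoder(nn.Module):
    """Hypothetical sketch: learned fusion tokens collect signals from
    whichever modalities are present; tokens of absent modalities are
    hidden from self-attention with a key padding mask, so any modality
    subset is a valid input."""

    def __init__(self, modalities=("sar", "optical"), tokens_per_mod=16,
                 dim=64, heads=4, num_fusion_tokens=4):
        super().__init__()
        self.modalities = modalities
        self.tokens_per_mod = tokens_per_mod
        self.fusion_tokens = nn.Parameter(torch.randn(1, num_fusion_tokens, dim))
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True)

    def forward(self, inputs: dict):
        # inputs: modality name -> (B, tokens_per_mod, dim); any subset of
        # self.modalities may be supplied, mirroring the random modality
        # combinations used during training.
        ref = next(iter(inputs.values()))
        B, device, dim = ref.size(0), ref.device, self.fusion_tokens.size(-1)
        k = self.fusion_tokens.size(1)
        chunks = [self.fusion_tokens.expand(B, -1, -1)]
        pad = [torch.zeros(B, k, dtype=torch.bool, device=device)]  # fusion tokens always attend
        for name in self.modalities:
            if name in inputs:
                chunks.append(inputs[name])
                pad.append(torch.zeros(B, self.tokens_per_mod, dtype=torch.bool, device=device))
            else:
                # zero placeholders for the missing modality, masked out below
                chunks.append(torch.zeros(B, self.tokens_per_mod, dim, device=device))
                pad.append(torch.ones(B, self.tokens_per_mod, dtype=torch.bool, device=device))
        x = torch.cat(chunks, dim=1)
        mask = torch.cat(pad, dim=1)  # True = position ignored by attention
        x = self.block(x, src_key_padding_mask=mask)
        # read the fused representation off the fusion-token positions
        return x[:, :k].mean(dim=1)

# Demo: the same encoder handles complete and incomplete inputs.
enc = IncompleteFusionEncoder()
full = enc({"sar": torch.randn(2, 16, 64), "optical": torch.randn(2, 16, 64)})
partial = enc({"sar": torch.randn(2, 16, 64)})  # optical missing at inference
print(full.shape, partial.shape)  # torch.Size([2, 64]) torch.Size([2, 64])
```

Because the fusion tokens are never masked, attention from them to the visible modality tokens is always well defined, which is what lets the same weights serve any modality combination.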
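The abstract names reconstruction and contrastive losses for the pretraining stage but does not spell out their form. Below is a minimal sketch assuming an InfoNCE-style contrastive term between the fused embeddings of a modality-dropped view and its full-modality counterpart, plus an MSE reconstruction term for the dropped tokens; the function name, the temperature, and the equal weighting of the two terms are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def pretraining_loss(z_partial, z_full, recon, target, temperature=0.1):
    """Contrastive + reconstruction objective (illustrative).
    z_partial: (B, D) fused embedding of the modality-incomplete view.
    z_full:    (B, D) fused embedding of the full-modality view.
    recon:     (B, N, D) predicted tokens of the dropped modality.
    target:    (B, N, D) ground-truth tokens of the dropped modality."""
    z_partial = F.normalize(z_partial, dim=-1)
    z_full = F.normalize(z_full, dim=-1)
    logits = z_partial @ z_full.t() / temperature  # (B, B) similarity matrix
    labels = torch.arange(z_full.size(0), device=z_full.device)  # positives on the diagonal
    contrastive = F.cross_entropy(logits, labels)
    reconstruction = F.mse_loss(recon, target)  # rebuild missing-modality tokens
    return contrastive + reconstruction
```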