Computer science
Artificial intelligence
Deep learning
Earth observation
Hyperspectral imaging
Convolutional neural network
Synthetic aperture radar
Modality (human–computer interaction)
Land cover
Discriminative model
Machine learning
Pattern recognition (psychology)
Land use
Engineering
Civil engineering
Aerospace engineering
Satellite
Authors
Jing Yao,Bing Zhang,Chenyu Li,Danfeng Hong,Jocelyn Chanussot
Identifiers
DOI:10.1109/tgrs.2023.3284671
Abstract
The recent success of attention mechanism-driven deep models, like Vision Transformer (ViT) as one of the most representative, has intrigued a wave of advanced research to explore their adaptation to broader domains. However, current Transformer-based approaches in the remote sensing (RS) community pay more attention to single-modality data, which might lose expandability in making full use of the ever-growing multimodal Earth observation data. To this end, we propose a novel multimodal deep learning framework by extending conventional ViT with minimal modifications, abbreviated as ExViT, aiming at the task of land use and land cover classification. Unlike common stems that adopt either linear patch projection or deep regional embedder, our approach processes multimodal RS image patches with parallel branches of position-shared ViTs extended with separable convolution modules, which offers an economical solution to leverage both spatial and modality-specific channel information. Furthermore, to promote information exchange across heterogeneous modalities, their tokenized embeddings are then fused through a cross-modality attention module by exploiting pixel-level spatial correlation in RS scenes. Both of these modifications significantly improve the discriminative ability of classification tokens in each modality and thus further performance increase can be finally attained by a full tokens-based decision-level fusion module. We conduct extensive experiments on two multimodal RS benchmark datasets, i.e., the Houston2013 dataset containing hyperspectral and light detection and ranging (LiDAR) data, and Berlin dataset with hyperspectral and synthetic aperture radar (SAR) data, to demonstrate that our ExViT outperforms concurrent competitors based on Transformer or convolutional neural network (CNN) backbones, in addition to several competitive machine learning-based models. The source codes and investigated datasets of this work will be made publicly available at https://github.com/jingyao16/ExViT.
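Below is a minimal, hypothetical PyTorch sketch of the kind of two-branch multimodal classifier the abstract describes: parallel, weight-shared Transformer encoders over separable-convolution patch tokens, a cross-modality attention exchange, and decision-level fusion of the two classification tokens. All module names, dimensions, and fusion details here are illustrative assumptions, not the authors' released implementation (see the GitHub link above); positional embeddings and training details are omitted for brevity.

```python
# Illustrative sketch only; not the official ExViT code.
import torch
import torch.nn as nn


class SeparableConvTokenizer(nn.Module):
    """Patch embedding via depthwise-separable convolution (assumed design)."""
    def __init__(self, in_ch, dim, patch=4):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=patch,
                                   stride=patch, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, dim, kernel_size=1)

    def forward(self, x):                      # x: (B, C, H, W)
        x = self.pointwise(self.depthwise(x))  # (B, dim, H/patch, W/patch)
        return x.flatten(2).transpose(1, 2)    # (B, N, dim) token sequence


class CrossModalityAttention(nn.Module):
    """Let each modality's tokens attend to the other's (assumed fusion)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, a, b):
        a2, _ = self.attn_a(a, b, b)  # modality-A tokens query modality B
        b2, _ = self.attn_b(b, a, a)  # modality-B tokens query modality A
        return a + a2, b + b2


class TwoBranchViT(nn.Module):
    def __init__(self, ch_a, ch_b, dim=64, depth=2, n_classes=15):
        super().__init__()
        self.tok_a = SeparableConvTokenizer(ch_a, dim)
        self.tok_b = SeparableConvTokenizer(ch_b, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)  # shared across branches
        self.cross = CrossModalityAttention(dim)
        self.cls_a = nn.Parameter(torch.zeros(1, 1, dim))
        self.cls_b = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(2 * dim, n_classes)  # decision-level fusion of CLS tokens

    def forward(self, xa, xb):
        B = xa.size(0)
        a = torch.cat([self.cls_a.expand(B, -1, -1), self.tok_a(xa)], dim=1)
        b = torch.cat([self.cls_b.expand(B, -1, -1), self.tok_b(xb)], dim=1)
        a, b = self.encoder(a), self.encoder(b)   # parallel, weight-shared branches
        a, b = self.cross(a, b)                   # cross-modality exchange
        return self.head(torch.cat([a[:, 0], b[:, 0]], dim=-1))


if __name__ == "__main__":
    model = TwoBranchViT(ch_a=144, ch_b=1)        # e.g. HSI bands + a LiDAR channel
    logits = model(torch.randn(2, 144, 16, 16), torch.randn(2, 1, 16, 16))
    print(logits.shape)                           # torch.Size([2, 15])
```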