Extended Vision Transformer (ExViT) for Land Use and Land Cover Classification: A Multimodal Deep Learning Framework

Keywords: computer science, artificial intelligence, deep learning, Earth observation, hyperspectral imaging, convolutional neural network, synthetic aperture radar, modality (human-computer interaction), land cover, benchmarking, machine learning, pattern recognition (psychology), land use, engineering, marketing, civil engineering, aerospace engineering, business, satellite
Authors
Jing Yao, Bing Zhang, Chenyu Li, Danfeng Hong, Jocelyn Chanussot
Source
Journal: IEEE Transactions on Geoscience and Remote Sensing [Institute of Electrical and Electronics Engineers]
Volume: 61, pp. 1-15 · Cited by: 56
Identifier
DOI: 10.1109/TGRS.2023.3284671
Abstract

The recent success of attention-driven deep models, with the Vision Transformer (ViT) as one of the most representative examples, has spurred a wave of research exploring their adaptation to broader domains. However, current Transformer-based approaches in the remote sensing (RS) community focus mainly on single-modality data, which limits their ability to make full use of the ever-growing multimodal Earth observation data. To this end, we propose a novel multimodal deep learning framework that extends the conventional ViT with minimal modifications, abbreviated as ExViT, for the task of land use and land cover classification. Unlike common stems that adopt either linear patch projection or a deep regional embedder, our approach processes multimodal RS image patches with parallel branches of position-shared ViTs extended with separable convolution modules, which offers an economical way to leverage both spatial and modality-specific channel information. Furthermore, to promote information exchange across heterogeneous modalities, the tokenized embeddings are fused through a cross-modality attention module that exploits pixel-level spatial correlation in RS scenes. Both modifications significantly improve the discriminative ability of the classification tokens in each modality, and a further performance gain is attained by a full tokens-based decision-level fusion module. We conduct extensive experiments on two multimodal RS benchmark datasets, i.e., the Houston2013 dataset containing hyperspectral and light detection and ranging (LiDAR) data, and the Berlin dataset with hyperspectral and synthetic aperture radar (SAR) data, to demonstrate that ExViT outperforms concurrent competitors based on Transformer or convolutional neural network (CNN) backbones, as well as several competitive machine learning-based models. The source code and investigated datasets of this work will be made publicly available at https://github.com/jingyao16/ExViT.
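
The abstract describes three mechanisms: separable-convolution tokenizers feeding parallel position-shared ViT branches, a cross-modality attention module for information exchange between modalities, and tokens-based decision-level fusion. The sketch below is a minimal PyTorch illustration of how such pieces could fit together, not the authors' implementation; every class name, dimension, and wiring choice here (SeparableConvEmbed, CrossModalityAttention, TwoBranchClassifier, the mean-pooled logit fusion) is an assumption made for exposition. The reference code is at the GitHub link above.

```python
# Minimal PyTorch sketch of the architecture outlined in the abstract.
# All names, sizes, and fusion details are illustrative assumptions; see
# https://github.com/jingyao16/ExViT for the authors' implementation.
import torch
import torch.nn as nn


class SeparableConvEmbed(nn.Module):
    """Tokenize one modality with a depthwise-separable convolution, a cheap
    way to capture spatial and modality-specific channel information."""
    def __init__(self, in_ch, dim, patch=4):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=patch,
                                   stride=patch, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, dim, kernel_size=1)

    def forward(self, x):                      # x: (B, C, H, W)
        x = self.pointwise(self.depthwise(x))  # (B, dim, H/p, W/p)
        return x.flatten(2).transpose(1, 2)    # (B, N, dim) token sequence


class CrossModalityAttention(nn.Module):
    """Exchange information between two token streams: each modality's
    queries attend over the other modality's keys and values."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, a, b):
        a2, _ = self.attn_ab(a, b, b)  # modality A queries modality B
        b2, _ = self.attn_ba(b, a, a)  # modality B queries modality A
        return a + a2, b + b2          # residual fusion


class TwoBranchClassifier(nn.Module):
    """Parallel per-modality Transformer branches sharing one positional
    embedding, fused at decision level from the tokens of both branches."""
    def __init__(self, ch_a, ch_b, dim=64, n_tokens=64, n_classes=15):
        super().__init__()
        self.embed_a = SeparableConvEmbed(ch_a, dim)
        self.embed_b = SeparableConvEmbed(ch_b, dim)
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))  # shared positions
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.enc_a = nn.TransformerEncoder(layer, num_layers=2)  # layers are cloned
        self.enc_b = nn.TransformerEncoder(layer, num_layers=2)
        self.xattn = CrossModalityAttention(dim)
        self.head_a = nn.Linear(dim, n_classes)
        self.head_b = nn.Linear(dim, n_classes)

    def forward(self, xa, xb):
        a = self.enc_a(self.embed_a(xa) + self.pos)  # same pos for both branches
        b = self.enc_b(self.embed_b(xb) + self.pos)
        a, b = self.xattn(a, b)
        # Decision-level fusion: pool all tokens per branch, sum the logits.
        return self.head_a(a.mean(1)) + self.head_b(b.mean(1))


# Toy usage: 32x32 pixel patches, e.g. 144-band hyperspectral + 1-band LiDAR.
model = TwoBranchClassifier(ch_a=144, ch_b=1)  # (32/4)^2 = 64 tokens per branch
out = model(torch.randn(2, 144, 32, 32), torch.randn(2, 1, 32, 32))
print(out.shape)  # torch.Size([2, 15])
```

Note that the paper fuses classification tokens at decision level; the mean-pooling over all tokens above is a simplification chosen to keep the sketch short.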