
Extended Vision Transformer (ExViT) for Land Use and Land Cover Classification: A Multimodal Deep Learning Framework

Keywords: Computer Science; Artificial Intelligence; Deep Learning; Earth Observation; Hyperspectral Imaging; Convolutional Neural Network; Synthetic Aperture Radar; Modality (Human-Computer Interaction); Land Cover; Discriminative; Machine Learning; Pattern Recognition (Psychology); Land Use; Engineering; Civil Engineering; Aerospace Engineering; Satellite

Authors
Jing Yao, Bing Zhang, Chenyu Li, Danfeng Hong, Jocelyn Chanussot

Source
Journal: IEEE Transactions on Geoscience and Remote Sensing (Institute of Electrical and Electronics Engineers)
Volume 61, pp. 1-15. Cited by: 174

Identifier
DOI: 10.1109/tgrs.2023.3284671

Abstract

The recent success of attention mechanism-driven deep models, like Vision Transformer (ViT) as one of the most representative, has intrigued a wave of advanced research to explore their adaptation to broader domains. However, current Transformer-based approaches in the remote sensing (RS) community pay more attention to single-modality data, which might lose expandability in making full use of the ever-growing multimodal Earth observation data. To this end, we propose a novel multimodal deep learning framework by extending conventional ViT with minimal modifications, abbreviated as ExViT, aiming at the task of land use and land cover classification. Unlike common stems that adopt either linear patch projection or deep regional embedder, our approach processes multimodal RS image patches with parallel branches of position-shared ViTs extended with separable convolution modules, which offers an economical solution to leverage both spatial and modality-specific channel information. Furthermore, to promote information exchange across heterogeneous modalities, their tokenized embeddings are then fused through a cross-modality attention module by exploiting pixel-level spatial correlation in RS scenes. Both of these modifications significantly improve the discriminative ability of classification tokens in each modality, and thus further performance increase can be finally attained by a full tokens-based decision-level fusion module. We conduct extensive experiments on two multimodal RS benchmark datasets, i.e., the Houston2013 dataset containing hyperspectral and light detection and ranging (LiDAR) data, and the Berlin dataset with hyperspectral and synthetic aperture radar (SAR) data, to demonstrate that our ExViT outperforms concurrent competitors based on Transformer or convolutional neural network (CNN) backbones, in addition to several competitive machine learning-based models.
The source codes and investigated datasets of this work will be made publicly available at https://github.com/jingyao16/ExViT.
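The cross-modality attention module described in the abstract lets the tokenized embeddings of each modality attend to those of the other before decision-level fusion. The toy sketch below is NOT the authors' implementation (see their repository for that); it is a minimal NumPy illustration of the general idea, assuming a single-head, residual cross-attention between two token sequences of equal length and dimension, e.g. a hyperspectral branch and a SAR branch:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modality_attention(tokens_a, tokens_b):
    """Fuse two modalities by letting each token sequence attend to the other.

    tokens_a, tokens_b: (num_tokens, dim) embeddings from two parallel branches.
    Returns residual-updated token sequences of the same shapes.
    """
    d = tokens_a.shape[-1]
    # Queries from one modality, keys/values from the other (scaled dot-product).
    attn_ab = softmax(tokens_a @ tokens_b.T / np.sqrt(d))  # (N, N)
    attn_ba = softmax(tokens_b @ tokens_a.T / np.sqrt(d))  # (N, N)
    fused_a = tokens_a + attn_ab @ tokens_b  # residual update of modality A
    fused_b = tokens_b + attn_ba @ tokens_a  # residual update of modality B
    return fused_a, fused_b

rng = np.random.default_rng(0)
hsi_tokens = rng.standard_normal((16, 64))  # hypothetical hyperspectral tokens
sar_tokens = rng.standard_normal((16, 64))  # hypothetical SAR tokens
fused_hsi, fused_sar = cross_modality_attention(hsi_tokens, sar_tokens)
print(fused_hsi.shape, fused_sar.shape)  # (16, 64) (16, 64)
```

A real implementation would add learned query/key/value projections, multiple heads, and layer normalization; the sketch only shows how information flows between the two modality branches before the tokens-based decision-level fusion.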
