
Disentangled Cross-Modal Transformer for RGB-D Salient Object Detection and Beyond

Keywords: Artificial intelligence, Transformer, Computer science, RGB color model, Pattern recognition (psychology)
Authors
Hao Chen, Feihong Shen, Ding Ding, Yongjian Deng, Chao Li
Source
Journal: IEEE Transactions on Image Processing [Institute of Electrical and Electronics Engineers]
Volume/pages: 33: 1699-1709; Cited by: 28
Identifier
DOI: 10.1109/tip.2024.3364022
Abstract

Previous multi-modal transformers for RGB-D salient object detection (SOD) generally connect all patches from the two modalities directly to model cross-modal correlation, and perform multi-modal combination without differentiation, which can lead to confusing and inefficient fusion. Instead, we disentangle the cross-modal complementarity from two views to reduce cross-modal fusion ambiguity: 1) Context disentanglement. We argue that modeling long-range dependencies across modalities as done before is uninformative due to the severe modality gap. Instead, we propose to disentangle the cross-modal complementary contexts into intra-modal self-attention, which explores global complementary understanding, and spatially-aligned inter-modal attention, which captures local cross-modal correlations. 2) Representation disentanglement. Unlike the previous undifferentiated combination of cross-modal representations, we find that cross-modal cues complement each other by enhancing common discriminative regions and by mutually supplementing modal-specific highlights. On top of this, we divide the tokens into consistent and private ones in the channel dimension to disentangle the multi-modal integration path and explicitly boost both complementary ways. By progressively propagating this strategy across layers, the proposed Disentangled Feature Pyramid module (DFP) enables informative cross-modal cross-level integration and better fusion adaptivity. Comprehensive experiments on a large variety of public datasets verify the efficacy of our context and representation disentanglement and the consistent improvement over state-of-the-art models. Additionally, our cross-modal attention hierarchy can be plugged into different backbone architectures (both transformer and CNN) and downstream tasks, and experiments on a CNN-based model and on RGB-D semantic segmentation verify this generalization ability.
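The two disentangled views described in the abstract can be sketched in plain NumPy. This is an illustrative toy, not the authors' implementation: all shapes, function names, and the gating scheme are assumptions. It contrasts dense intra-modal self-attention (a full N×N affinity within one modality) with spatially-aligned inter-modal attention (each token only interacts with the co-located token of the other modality), followed by the channel split into consistent and private tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_modal_self_attention(x):
    """Global self-attention within ONE modality.
    x: (N, C) patch tokens; Q/K/V projections omitted for brevity."""
    scores = x @ x.T / np.sqrt(x.shape[1])   # dense (N, N) affinity
    return softmax(scores) @ x

def spatially_aligned_cross_attention(rgb, depth):
    """Each RGB token attends only to the depth token at the SAME
    spatial position, so no dense cross-modal affinity is built.
    A per-position 2-way softmax gate (an assumption here) weighs
    the co-located depth cue against the RGB cue itself."""
    gate = softmax(np.stack([np.sum(rgb * depth, -1),
                             np.sum(rgb * rgb, -1)], -1))  # (N, 2)
    return gate[..., :1] * depth + gate[..., 1:] * rgb

def split_consistent_private(tokens, c_consistent):
    """Representation disentanglement: split channels into a
    modality-consistent part and a modality-private part."""
    return tokens[:, :c_consistent], tokens[:, c_consistent:]

rgb = np.random.randn(16, 64)    # 16 patches, 64 channels
depth = np.random.randn(16, 64)
fused = spatially_aligned_cross_attention(intra_modal_self_attention(rgb),
                                          intra_modal_self_attention(depth))
cons, priv = split_consistent_private(fused, 32)
print(cons.shape, priv.shape)    # (16, 32) (16, 32)
```

The point of the contrast: the intra-modal path pays the quadratic cost of dense attention only within a modality, while the cross-modal path is linear in the number of patches because it never forms a cross-modal affinity matrix.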