A General Spatial-Frequency Learning Framework for Multimodal Image Fusion

Topics: artificial intelligence, computer science, computer vision, frequency domain, convolution (computer science), domain (mathematical analysis), spatial analysis, sharpening, pattern recognition, spatial frequency, artificial neural network, mathematics, statistics, optics, physics, mathematical analysis
Authors
Man Zhou, Jie Huang, Keyu Yan, Danfeng Hong, Xiuping Jia, Jocelyn Chanussot, Chongyi Li
Source
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence [IEEE Computer Society]
Pages: 1-18 | Cited by: 43
Identifier
DOI: 10.1109/tpami.2024.3368112
Abstract

Multimodal image fusion involves tasks such as pan-sharpening and depth super-resolution. Both aim to generate high-resolution target images by fusing complementary information from a texture-rich guidance image and a low-resolution target counterpart, and both inherently involve reconstructing high-frequency information. Despite this inherent frequency-domain connection, most existing methods operate solely in the spatial domain and rarely explore solutions in the frequency domain. This study addresses that limitation by proposing solutions in both the spatial and frequency domains. To this end, we devise a Spatial-Frequency Information Integration Network, abbreviated as SFINet. SFINet includes a core module tailored for image fusion, consisting of three key components: a spatial-domain information branch, a frequency-domain information branch, and a dual-domain interaction. The spatial-domain branch employs spatial convolutions to integrate local information from different modalities in the spatial domain, while the frequency-domain branch adopts a modality-aware deep Fourier transformation to capture an image-wide receptive field and explore global contextual information. The dual-domain interaction facilitates information flow and the learning of complementary representations. We further present an improved version, SFINet++, which enhances the representation of spatial information by replacing the basic convolution unit in the original spatial-domain branch with an information-lossless invertible neural operator. We conduct extensive experiments to validate the effectiveness of the proposed networks and demonstrate their outstanding performance against state-of-the-art methods on two representative multimodal image fusion tasks: pan-sharpening and depth super-resolution.
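The core idea of the frequency-domain branch — processing features in the Fourier domain so that every output position depends on the whole image — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; `amp_scale` and `pha_shift` are hypothetical stand-ins for the learned, modality-aware parameters described in the abstract.

```python
import numpy as np

def fourier_branch(feat, amp_scale, pha_shift):
    """Sketch of a frequency-domain branch: map a feature map to the
    Fourier domain, modulate amplitude and phase separately, and map
    back. Because every Fourier coefficient mixes all spatial positions,
    a per-coefficient operation has an image-wide receptive field."""
    spec = np.fft.rfft2(feat)                 # 2-D FFT over spatial dims
    amp = np.abs(spec) * amp_scale            # amplitude modulation
    pha = np.angle(spec) + pha_shift          # phase modulation
    # Recombine and invert; output shape matches the input.
    return np.fft.irfft2(amp * np.exp(1j * pha), s=feat.shape)
```

With `amp_scale=1` and `pha_shift=0` the branch reduces to the identity, which makes the round trip easy to sanity-check; in a real network both would be predicted from the guidance modality.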
The source code is publicly available at https://github.com/manman1995/Awaresome-pansharpening.
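The "information-lossless invertible neural operator" used in SFINet++ is, in spirit, an invertible transform whose inverse can be written in closed form. A standard way to build one is an affine coupling step (as in normalizing flows); the sketch below is a generic illustration of that construction, not the paper's specific operator, and `scale_fn`/`shift_fn` are placeholders for small learned networks.

```python
import numpy as np

def coupling_forward(x1, x2, scale_fn, shift_fn):
    """Affine coupling: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1).
    Invertible by construction, so no information is discarded."""
    y1 = x1
    y2 = x2 * np.exp(scale_fn(x1)) + shift_fn(x1)
    return y1, y2

def coupling_inverse(y1, y2, scale_fn, shift_fn):
    """Exact inverse of coupling_forward, using the same s and t."""
    x1 = y1
    x2 = (y2 - shift_fn(y1)) * np.exp(-scale_fn(y1))
    return x1, x2
```

Because one half of the features passes through unchanged while conditioning the transform of the other half, the inverse needs only the same `scale_fn` and `shift_fn`, which is what makes the operator lossless.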