Keywords: Fusion, Image fusion, Infrared, Domain (mathematical analysis), Frequency domain, Artificial intelligence, Computer science, Computer vision, Image (mathematics), Physics, Mathematics, Optics, Linguistics, Mathematical analysis, Philosophy
Authors
Kun Hu, Qingle Zhang, Maoxun Yuan, Yitian Zhang
Source
Journal: Frontiers in Artificial Intelligence and Applications
Date: 2024-10-16
Citations: 8
Abstract
Infrared and visible image fusion aims to utilize the complementary information from two modalities to generate fused images with prominent targets and rich texture details. Most existing algorithms only perform pixel-level or feature-level fusion from different modalities in the spatial domain. They usually overlook the information in the frequency domain, and some of them suffer from inefficiency due to excessively complex structures. To tackle these challenges, this paper proposes an efficient Spatial-Frequency Domain Fusion (SFDFusion) network for infrared and visible image fusion. First, we propose a Dual-Modality Refinement Module (DMRM) to extract complementary information. This module extracts useful information from both the infrared and visible modalities in the spatial domain and enhances fine-grained spatial details. Next, to introduce frequency domain information, we construct a Frequency Domain Fusion Module (FDFM) that transforms the spatial domain to the frequency domain through Fast Fourier Transform (FFT) and then integrates frequency domain information. Additionally, we design a frequency domain fusion loss to provide guidance for the fusion process. Extensive experiments on public datasets demonstrate that our method produces fused images with significant advantages in various fusion metrics and visual effects. Furthermore, our method demonstrates high efficiency in image fusion and good performance on downstream detection tasks, thereby satisfying the real-time demands of advanced visual tasks. The code is available at https://github.com/lqz2/SFDFusion.
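The abstract's core idea, fusing information in the frequency domain via the FFT, can be sketched in a few lines. The snippet below is not the paper's FDFM (whose architecture and fusion rule are not described here); it only illustrates the general pattern, with a hypothetical magnitude-based fusion rule: transform both modalities with a 2-D FFT, combine their spectra, and invert back to the spatial domain.

```python
import numpy as np

def frequency_domain_fuse(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Illustrative frequency-domain fusion of two single-channel images.

    Sketch only: this is an assumption-laden toy rule, not the FDFM
    described in the paper.
    """
    # Forward 2-D FFT of each modality.
    F_ir = np.fft.fft2(ir)
    F_vis = np.fft.fft2(vis)
    # Toy fusion rule (assumption): per frequency, keep the coefficient
    # with the larger magnitude, i.e. the stronger spectral response.
    mask = np.abs(F_ir) >= np.abs(F_vis)
    F_fused = np.where(mask, F_ir, F_vis)
    # Inverse FFT back to the spatial domain; the inputs are real-valued
    # images, so only the real part is kept.
    return np.real(np.fft.ifft2(F_fused))
```

A learned module like FDFM would replace the hand-crafted magnitude rule with trainable operations on the spectra, and the paper's frequency-domain fusion loss would then compare the fused spectrum against those of the source images during training.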