Infrared
Image fusion
Decomposition
Artificial intelligence
Computer vision
Noise (video)
Fusion
Visible spectrum
Computer science
Image (mathematics)
Materials science
Optics
Physics
Optoelectronics
Chemistry
Linguistics
Philosophy
Organic chemistry
Authors
Jingxue Huang, Xiaosong Li, Haishu Tan, Lemiao Yang, Wang Gao, Yi Peng
Source
Journal: Measurement
[Elsevier]
Date: 2024-06-09
Volume/Issue: 237: 115092-115092
Citations: 18
Identifier
DOI: 10.1016/j.measurement.2024.115092
Abstract
Infrared and visible image fusion integrates useful information from images of different modalities to generate a single image with comprehensive details and highlighted targets, thereby deepening scene interpretation. However, existing deep-learning-based methods do not account for noise, leading to suboptimal fusion results on noisy inputs. To address this issue, we propose a decomposition-driven neural network (DeDNet) that performs fusion and noise removal jointly. By introducing constraints between the fused image and the ground-truth source images into the loss function, we develop an autoencoder as the basic fusion and denoising network. Furthermore, we propose a decomposition network that guides the decomposition of the fusion result, improving denoising and detail recovery. Experiments demonstrate that DeDNet outperforms state-of-the-art methods in both objective and subjective evaluations and yields competitive performance in detection and segmentation. On the Qcb, EN, SSIM, PSNR, and CC metrics, DeDNet achieves average improvements of 10.92%, 21.13%, 82.97%, 8.55%, and 16.26%, respectively, over the compared methods.
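The following is a minimal sketch of the joint fusion-and-denoising idea the abstract describes, not the authors' actual DeDNet: a simple convolutional autoencoder fuses the noisy infrared and visible inputs, while the loss constrains the fused output against the clean (ground-truth) source images, so denoising is learned together with fusion. The decomposition-guidance network is omitted; the network shapes, the intensity-plus-gradient loss, and names such as FusionAutoencoder and fusion_denoise_loss are illustrative assumptions.

```python
# Minimal sketch (not the authors' DeDNet) of joint fusion and denoising:
# the fused output of noisy inputs is constrained against the CLEAN sources.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionAutoencoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # encoder takes the two modalities concatenated on the channel axis
        self.encoder = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),  # fused image in [0, 1]
        )

    def forward(self, ir_noisy, vis_noisy):
        feat = self.encoder(torch.cat([ir_noisy, vis_noisy], dim=1))
        return self.decoder(feat)

def fusion_denoise_loss(fused, ir_clean, vis_clean, w_int=1.0, w_grad=1.0):
    """Constrain the fused image against the clean source images:
    an intensity term keeps salient brightness, a gradient term keeps detail."""
    # intensity: match the per-pixel maximum of the clean sources
    loss_int = F.l1_loss(fused, torch.maximum(ir_clean, vis_clean))

    # gradient: match the stronger of the two clean gradients (finite differences)
    def grad(x):
        gx = x[..., :, 1:] - x[..., :, :-1]
        gy = x[..., 1:, :] - x[..., :-1, :]
        return gx, gy

    fgx, fgy = grad(fused)
    igx, igy = grad(ir_clean)
    vgx, vgy = grad(vis_clean)
    loss_grad = (F.l1_loss(fgx, torch.where(igx.abs() > vgx.abs(), igx, vgx))
                 + F.l1_loss(fgy, torch.where(igy.abs() > vgy.abs(), igy, vgy)))
    return w_int * loss_int + w_grad * loss_grad

# usage with random tensors standing in for noisy/clean image pairs
model = FusionAutoencoder()
ir_noisy, vis_noisy = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
ir_clean, vis_clean = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
fused = model(ir_noisy, vis_noisy)
loss = fusion_denoise_loss(fused, ir_clean, vis_clean)
loss.backward()
```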