Fuse (electrical)
Computer science
Artificial intelligence
Image fusion
Fusion
Infrared
Pattern recognition (psychology)
Block (permutation group theory)
Computer vision
Enhanced Data Rates for GSM Evolution (EDGE)
Fusion rule
Image (mathematics)
Mathematics
Physics
Optics
Philosophy
Quantum mechanics
Linguistics
Geometry
Source
Journal: IEEE Access [Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume: 10, pp. 126117-126132
Citations: 7
Identifier
DOI: 10.1109/access.2022.3226564
Abstract
Infrared and visible image fusion aims to generate more informative images of a given scene by combining multimodal images with complementary information. Although recent learning-based approaches have shown significant fusion performance, developing an effective fusion algorithm that can preserve complementary information while preventing bias toward either of the source images remains a significant challenge. In this work, we propose a multiscale progressive fusion (MPFusion) algorithm that extracts and progressively fuses multiscale features of infrared and visible images. The proposed algorithm consists of two networks, IRNet and FusionNet, which extract the intrinsic features of infrared and visible images, respectively. We transfer the multiscale information of the infrared image from IRNet to FusionNet to generate an informative fusion result. To this end, we develop the multi-dilated residual block (MDRB) and the progressive fusion block (PFB), which progressively combine the multiscale features from IRNet with those from FusionNet to fuse complementary features effectively and adaptively. Furthermore, we exploit edge-guided attention maps to preserve complementary edge information in the source images during fusion. Experimental results on several datasets demonstrate that the proposed algorithm outperforms state-of-the-art infrared and visible image fusion algorithms in both quantitative and qualitative comparisons.
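The core idea behind the edge-guided attention maps mentioned in the abstract, weighting each source image by its local edge strength so that complementary edge information survives fusion, can be illustrated with a minimal NumPy sketch. This is a simplified, hand-crafted stand-in, not the paper's learned IRNet/FusionNet architecture: the finite-difference `edge_map` and the normalized per-pixel weighting below are assumptions made purely for illustration.

```python
import numpy as np

def edge_map(img):
    """Gradient-magnitude edge map via simple finite differences
    (a crude stand-in for a learned edge-attention branch)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal gradient
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)

def edge_guided_fuse(ir, vis, eps=1e-8):
    """Fuse two registered grayscale images with per-pixel weights
    proportional to each source's edge strength; eps keeps the
    weights well-defined in flat regions (where the result falls
    back to a plain average)."""
    w_ir = edge_map(ir) + eps
    w_vis = edge_map(vis) + eps
    return (w_ir * ir + w_vis * vis) / (w_ir + w_vis)

# Usage: in flat regions both edge maps vanish, so the fusion
# degrades gracefully to the mean of the two sources.
ir = np.full((8, 8), 0.2)
vis = np.full((8, 8), 0.8)
fused = edge_guided_fuse(ir, vis)
```

Because each output pixel is a convex combination of the corresponding infrared and visible pixels, the fused image is always bounded by the value range of the two sources, which loosely mirrors the paper's goal of avoiding bias toward either input.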