Image fusion
Artificial intelligence
Computer science
Fusion
Decoding methods
Pixel
Computer vision
Merge (version control)
Image (mathematics)
Pattern recognition (psychology)
Feature (linguistics)
Algorithm
Information retrieval
Linguistics
Philosophy
Authors
Jinyang Liu, Renwei Dian, Shutao Li, Haibo Li
Identifier
DOI:10.1016/j.inffus.2022.09.030
Abstract
Pixel-level image fusion, which merges different modal images into a single informative image, has attracted increasing attention. Although many methods have been proposed for pixel-level image fusion, there is a lack of effective methods that can simultaneously handle different fusion tasks. To address this problem, we propose a saliency-guided deep-learning framework for pixel-level image fusion called SGFusion, an end-to-end fusion network that can be applied to a variety of fusion tasks by training a single model. Specifically, the proposed network uses dual-guided encoding, image reconstruction decoding, and saliency detection decoding to simultaneously extract feature maps and saliency maps at different scales from the image. The saliency maps produced by the saliency detection decoder serve as fusion weights for merging the features from the image reconstruction decoder to generate the fused image, which effectively extracts meaningful information from the source images and makes the fused image more consistent with human visual perception. Experiments indicate that the proposed fusion method achieves state-of-the-art performance in infrared and visible image fusion, multi-exposure image fusion, and medical image fusion on various public datasets.
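The abstract's core idea is to use saliency maps as per-pixel fusion weights when merging features from two source images. Below is a minimal illustrative sketch of that weighting scheme in NumPy; the function name, the softmax-style normalization, and the toy inputs are assumptions for illustration, not the paper's actual network or training procedure.

```python
import numpy as np

def saliency_weighted_fusion(feat_a, feat_b, sal_a, sal_b, eps=1e-8):
    """Merge two feature maps pixel-wise, weighting each source by its
    (illustrative) saliency map. Weights are normalized to sum to 1.
    This is a sketch of the weighting idea, not SGFusion itself."""
    w_a = sal_a / (sal_a + sal_b + eps)  # normalized weight for source A
    w_b = 1.0 - w_a                      # complementary weight for source B
    return w_a * feat_a + w_b * feat_b

# Toy example: source A is judged fully salient, source B not at all,
# so the fused result should follow source A.
a = np.full((4, 4), 0.2)
b = np.full((4, 4), 0.8)
sal_a = np.ones((4, 4))
sal_b = np.zeros((4, 4))
fused = saliency_weighted_fusion(a, b, sal_a, sal_b)
```

In the paper this weighting is applied to decoder features at multiple scales rather than directly to pixel intensities, so the sketch above only conveys the per-pixel weighting principle.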