Computer science
Infrared
Image fusion
Computer vision
Artificial intelligence
Image (mathematics)
Optics
Physics
Authors
Quanquan Xiao,Haiyan Jin,Haonan Su,Yuanlin Zhang,Zhaolin Xiao,Bin Wang
Identifier
DOI: 10.1109/tmm.2024.3521848
Abstract
Infrared and visible image fusion is currently an important research direction in the field of multimodal image fusion, which aims to exploit the complementary information between infrared and visible images to generate a new image containing richer information. In recent years, many deep learning-based methods for infrared and visible image fusion have emerged. However, most of these approaches ignore the importance of semantic information in image fusion, so the fused images they generate do not perform well enough in human visual perception or in advanced vision tasks. To address this problem, we propose a semantic prior knowledge-driven infrared and visible image fusion method. The method uses a pre-trained semantic segmentation model to acquire semantic information from the infrared and visible images, and drives the fusion process through a semantic feature perception module and a semantic feature embedding module. Meanwhile, we divide the fused image into per-category blocks, treat them as components, and use a regional semantic adversarial loss to strengthen the adversarial network's generation ability in different regions, thereby improving the quality of the fused image. Extensive experiments on widely used datasets show that our approach outperforms current leading algorithms in both human visual perception and advanced vision tasks.
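The regional semantic adversarial loss described in the abstract can be pictured as masking the fused image by each semantic class and scoring every regional crop with a discriminator, so that each region is pushed to look realistic on its own. Below is a minimal PyTorch-style sketch of that idea under stated assumptions; the `discriminator` interface, the binary-mask construction, and the non-saturating BCE loss form are illustrative choices, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def regional_adversarial_loss(discriminator, fused, seg_mask, num_classes):
    """Sketch of a region-wise adversarial (generator-side) loss.

    fused:    B x C x H x W fused image produced by the generator
    seg_mask: B x H x W integer map from a pre-trained segmentation model
    """
    losses = []
    for c in range(num_classes):
        region = (seg_mask == c).float().unsqueeze(1)   # B x 1 x H x W binary mask
        if region.sum() == 0:                           # skip classes absent from the batch
            continue
        region_img = fused * region                     # keep only this class's pixels
        score = discriminator(region_img)               # per-region realism logits
        # non-saturating GAN loss: push each regional crop toward "real"
        losses.append(F.binary_cross_entropy_with_logits(
            score, torch.ones_like(score)))
    return torch.stack(losses).mean() if losses else fused.new_zeros(())
```

In this sketch the per-class losses are simply averaged; a weighting scheme over classes or region sizes would be an equally plausible design choice.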