Infrared
Leverage (statistics)
Fusion
Image fusion
Process (computing)
Modal
Computer science
Artificial intelligence
Computer vision
Pattern recognition (psychology)
Optics
Image (mathematics)
Materials science
Physics
Linguistics
Philosophy
Polymer chemistry
Operating system
Author
Yang Yang,Zhennan Ren,Beichen Li,Yue Lang,Xiaoru Pan,Ruihai Li,Ming Ge
Identifier
DOI:10.1016/j.optlaseng.2023.107528
Abstract
Because infrared images emphasize thermal targets while visible images provide rich textural information, fusing the two modalities improves human perception of a scene. In image fusion, it is essential to identify the informative regions of both modal images and to exploit this prior knowledge during the fusion phase, so as to avoid image degradation caused by cross-modal interference. To this end, we present a fusion model in which the fusion process is guided by a thermal target mask, thereby alleviating cross-modal interference. This mask is created by suppressing the background of the infrared image, which significantly highlights thermal objects and enables exploration of the associated regions. In addition, a two-channel multi-scale feature extraction network is designed to retain the semantic information of the two source images. We verify the presented model's effectiveness on public datasets; the fused images of our method fully preserve the infrared image's thermal targets and the visible image's background texture. Extensive experiments on publicly available datasets demonstrate that our model outperforms other state-of-the-art models in terms of both visual and objective assessment.
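To illustrate the idea of mask-guided fusion described in the abstract, the sketch below shows a deliberately minimal version: a thermal target mask is obtained by suppressing the infrared background (here a simple intensity threshold, a stand-in for the paper's actual suppression step), and the mask then weights the per-pixel combination of the two modalities. The function names, the threshold value, and the linear blending rule are illustrative assumptions, not the authors' network.

```python
import numpy as np

def thermal_mask(ir: np.ndarray, thresh: float = 0.6) -> np.ndarray:
    """Hypothetical background suppression: normalize the infrared
    image to [0, 1] and keep only bright (hot) pixels as the mask."""
    ir_norm = (ir - ir.min()) / (ir.max() - ir.min() + 1e-8)
    return (ir_norm > thresh).astype(np.float32)

def mask_guided_fuse(ir: np.ndarray, vis: np.ndarray,
                     thresh: float = 0.6) -> np.ndarray:
    """Blend the two modalities per pixel: thermal-target regions come
    from the infrared image, background regions from the visible image.
    A simple linear rule standing in for the learned fusion network."""
    m = thermal_mask(ir, thresh)
    return m * ir + (1.0 - m) * vis

# Toy example: one hot pixel (top-left) taken from the infrared image,
# the textured background taken from the visible image.
ir = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=np.float32)
vis = np.array([[0.2, 0.5], [0.5, 0.5]], dtype=np.float32)
fused = mask_guided_fuse(ir, vis)
```

In the toy example, `fused[0, 0]` equals the infrared value `1.0` (inside the mask) while the remaining pixels keep the visible values `0.5`, showing how the mask confines each modality to its informative regions.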