Authors
Qiang Fu, Hanxiang Fu, Yuezhou Wu
Source
Journal: Electronics
[Multidisciplinary Digital Publishing Institute]
Date: 2023-10-19
Volume/Issue: 12 (20): 4342
Citations: 4
Identifier
DOI: 10.3390/electronics12204342
Abstract
Single infrared images and single visible images each have inherent limitations. Fusion technology has been developed to overcome these limitations: it aims to generate a fused image that combines infrared salient information with visible texture details. Most traditional fusion methods rely on hand-designed fusion strategies, but some of these strategies are too coarse and yield limited fusion performance. More recently, researchers have proposed fusion methods based on deep learning, yet some early fusion networks cannot adaptively fuse images because of unsuitable designs. We therefore propose a mask and cross-dynamic fusion-based network called MCDFN. This network adaptively preserves the salient features of infrared images and the texture details of visible images through an end-to-end fusion process. Specifically, we designed a two-stage fusion network. In the first stage, we train the autoencoder network so that the encoder and decoder learn feature extraction and reconstruction capabilities. In the second stage, the autoencoder is fixed, and we employ a fusion strategy combining mask and cross-dynamic fusion to train the entire fusion network. This strategy enables the adaptive fusion of information between infrared and visible images across multiple dimensions. On the public TNO and RoadScene datasets, we compared our proposed method against nine different fusion methods. Experimental results show that our proposed fusion method achieves good results on both datasets.
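The mask-based fusion idea from the abstract can be illustrated with a minimal sketch. This is not the MCDFN implementation (which learns features with an autoencoder and fuses in feature space); it is a hypothetical pixel-level analogue in which a saliency mask derived from the infrared image drives a convex combination of the two inputs:

```python
import numpy as np

def saliency_mask(ir, eps=1e-8):
    # Hypothetical mask: min-max normalized intensity of the infrared
    # image, so bright (salient) thermal regions weight the IR input more.
    return (ir - ir.min()) / (ir.max() - ir.min() + eps)

def mask_fusion(ir, vis):
    # Pixel-wise convex combination driven by the mask:
    # fused = m * ir + (1 - m) * vis, so each output pixel lies
    # between the corresponding IR and visible pixel values.
    m = saliency_mask(ir)
    return m * ir + (1.0 - m) * vis

# Toy 2x2 "images" with intensities in [0, 1].
ir = np.array([[0.9, 0.1], [0.5, 0.0]])
vis = np.array([[0.2, 0.8], [0.4, 1.0]])
fused = mask_fusion(ir, vis)
```

In MCDFN the analogous weighting is learned and applied in multiple feature dimensions rather than fixed per pixel, which is what makes the fusion adaptive.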