Upsampling
Artificial intelligence
Computer science
Context (archaeology)
Image fusion
Pattern recognition (psychology)
Image (mathematics)
Fusion
Feature extraction
Feature (linguistics)
Image resolution
Adaptability
Resolution (logic)
Computer vision
Linguistics
Philosophy
Paleontology
Ecology
Biology
Authors
Wenxiang Zhang, Chunmeng Wang, Jun Zhu
Source
Journal: Sensors
[Multidisciplinary Digital Publishing Institute]
Date: 2025-04-16
Volume/Issue: 25(8): 2500
Citations: 2
Abstract
Recently, deep learning-based multi-exposure image fusion methods have been widely explored due to their high efficiency and adaptability. However, most existing multi-exposure image fusion methods lack sufficient feature-extraction ability to recover information and details in extremely exposed areas. To address this problem, we propose a multi-exposure image fusion method based on a low-resolution context aggregation attention network (MEF-CAAN). First, we feed low-resolution versions of the input images to CAAN to predict their low-resolution weight maps. Then, the high-resolution weight maps are generated by guided filtering for upsampling (GFU). Finally, the high-resolution fused image is generated by a weighted summation operation. Our proposed network is unsupervised and adaptively adjusts channel weights to achieve better feature extraction. Experimental results show that our method outperforms existing state-of-the-art methods in both quantitative and qualitative evaluations.
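The abstract outlines a three-step pipeline: predict low-resolution weight maps, upsample them with guided filtering, and fuse the exposures by weighted summation. The sketch below illustrates that flow under stated assumptions; `predict_weights`, the guided-filter radius and eps, and the downsampling factor are hypothetical placeholders for illustration, not the authors' MEF-CAAN implementation.

```python
# Minimal sketch of the pipeline described in the abstract, assuming
# float RGB inputs in [0, 1] with shape (H, W, 3). `predict_weights` is a
# hypothetical stand-in for the CAAN network; radius, eps, and scale are
# illustrative values, not the authors' settings.
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by `guide` (classic guided filter)."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    var_i = uniform_filter(guide * guide, size) - mean_i * mean_i
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def predict_weights(lr_images):
    """Placeholder for the low-resolution network: favour well-exposed pixels."""
    w = np.stack([np.exp(-((img.mean(axis=-1) - 0.5) ** 2) / 0.08)
                  for img in lr_images])
    return w / (w.sum(axis=0, keepdims=True) + 1e-8)

def fuse(images, scale=4):
    # 1) Downsample the exposures and predict low-resolution weight maps.
    lr = [zoom(img, (1 / scale, 1 / scale, 1), order=1) for img in images]
    w_lr = predict_weights(lr)

    # 2) Guided-filter upsampling (GFU): bilinearly upsample each weight map,
    #    then refine it with the full-resolution luminance as the guide.
    w_hr = []
    for k, img in enumerate(images):
        guide = img.mean(axis=-1)
        up = zoom(w_lr[k], (guide.shape[0] / w_lr[k].shape[0],
                            guide.shape[1] / w_lr[k].shape[1]), order=1)
        w_hr.append(np.clip(guided_filter(guide, up), 0.0, None))
    w_hr = np.stack(w_hr)
    w_hr /= w_hr.sum(axis=0, keepdims=True) + 1e-8

    # 3) Weighted summation over the exposure stack.
    return np.einsum('khw,khwc->hwc', w_hr, np.stack(images))

# Example usage with a hypothetical loader:
# under, over = load_exposures(...)
# fused = fuse([under, over])
```

Guided filtering is used for the upsampling step because it transfers the edges of the full-resolution guide image to the upsampled weight maps, so the fusion weights stay aligned with image structure rather than showing interpolation blur.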