Computer science
Artificial intelligence
Ghosting artifacts
Computer vision
Image fusion
Benchmark dataset
Deep learning
Optics
Authors
Xiao Tan,Huaian Chen,Rui Zhang,Qihan Wang,Yan Kan,Jinjin Zheng,Yi Jin,Enhong Chen
Identifier
DOI: 10.1109/tip.2023.3315123
Abstract
Recently, learning-based multi-exposure fusion (MEF) methods have made significant improvements. However, these methods mainly focus on static scenes and are prone to generate ghosting artifacts when tackling a more common scenario, i.e., the input images include motion, due to the lack of a benchmark dataset and solution for dynamic scenes. In this paper, we fill this gap by creating an MEF dataset of dynamic scenes, which contains multi-exposure image sequences and their corresponding high-quality reference images. To construct such a dataset, we propose a 'static-for-dynamic' strategy to obtain multi-exposure sequences with motions and their corresponding reference images. To the best of our knowledge, this is the first MEF dataset of dynamic scenes. Correspondingly, we propose a deep dynamic MEF (DDMEF) framework to reconstruct a ghost-free high-quality image from only two differently exposed images of a dynamic scene. DDMEF is achieved through two steps: pre-enhancement-based alignment and privilege-information-guided fusion. The former pre-enhances the input images before alignment, which helps to address the misalignments caused by the significant exposure difference. The latter introduces a privilege distillation scheme with an information attention transfer loss, which effectively improves the deghosting ability of the fusion network. Extensive qualitative and quantitative experimental results show that the proposed method outperforms state-of-the-art dynamic MEF methods. The source code and dataset are released at https://github.com/Tx000/Deep_dynamicMEF.
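The abstract describes a two-step pipeline: pre-enhancement-based alignment followed by privilege-information-guided fusion trained with an information attention transfer loss. The sketch below is a minimal, hypothetical PyTorch illustration of that structure only; the module names (PreEnhance, FusionNet) and the simple L1 attention-transfer term are placeholder assumptions and do not reproduce the authors' released implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PreEnhance(nn.Module):
    """Hypothetical pre-enhancement block: adjusts an input exposure
    (e.g., brightens the under-exposed image) so the two differently
    exposed inputs are easier to align."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        # Residual correction, clamped to the valid intensity range.
        return torch.clamp(x + self.body(x), 0.0, 1.0)


class FusionNet(nn.Module):
    """Hypothetical fusion block: predicts per-pixel blending weights from
    the pre-enhanced pair and fuses the original exposures; also exposes an
    intermediate attention map that a distillation loss could supervise."""
    def __init__(self, ch=32):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.attn = nn.Conv2d(ch, 1, 1)    # spatial attention map
        self.weight = nn.Conv2d(ch, 2, 1)  # one weight map per input image

    def forward(self, under, over, under_enh, over_enh):
        f = self.feat(torch.cat([under_enh, over_enh], dim=1))
        attn = torch.sigmoid(self.attn(f))
        w = torch.softmax(self.weight(f), dim=1)
        fused = w[:, 0:1] * under + w[:, 1:2] * over
        return fused, attn


def attention_transfer_loss(student_attn, teacher_attn):
    """Generic stand-in for an attention-transfer distillation term:
    L1 distance between the student's attention map and a detached
    teacher (privileged) attention map."""
    return F.l1_loss(student_attn, teacher_attn.detach())
```

In a privileged-distillation setup of this kind, the teacher would see information unavailable at test time (for example, the reference image), and the loss above would push the deployed fusion network's attention toward the teacher's, which is one plausible reading of how the deghosting ability is transferred.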