Authors
Han Xu,Jiayi Ma,Xiaoping Zhang
Source
Journal: IEEE Transactions on Image Processing (Institute of Electrical and Electronics Engineers)
Date: 2020-01-01
Volume/pages: 29: 7203-7216
Citations: 125
Identifier
DOI:10.1109/tip.2020.2999855
Abstract
In this paper, we present an end-to-end architecture for multi-exposure image fusion based on generative adversarial networks, termed MEF-GAN. In our architecture, a generator network and a discriminator network are trained simultaneously in an adversarial relationship. The generator is trained to produce a realistic fused image from the given source images that is expected to fool the discriminator; correspondingly, the discriminator is trained to distinguish the generated fused images from the ground truth. This adversarial relationship frees the fused image from the restriction of the content loss alone, so the fused images come closer to the ground truth in terms of probability distribution, compensating for the insufficiency of a single content loss. Moreover, because the luminance of multi-exposure images varies greatly with spatial location, a self-attention mechanism is employed in our architecture to allow for attention-driven modeling of long-range dependencies. Thus, local distortion, confusing results, or inappropriate representations can be corrected in the fused image. Qualitative and quantitative experiments are performed on publicly available datasets, and the results demonstrate that MEF-GAN outperforms the state of the art in terms of both visual effect and objective evaluation metrics. Our code is publicly available at https://github.com/jiayi-ma/MEF-GAN.
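The abstract describes a generator loss that combines a content term with an adversarial term, against a discriminator trained to separate fused images from ground truth. The following is a minimal NumPy sketch of that kind of objective; the function names, the standard non-saturating GAN losses, and the weighting `lambda_adv` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    # Map a discriminator logit to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def content_loss(fused, ground_truth):
    # Pixel-wise L2 content loss between the fused image and ground truth.
    return float(np.mean((fused - ground_truth) ** 2))

def generator_loss(d_logit_fused, fused, ground_truth, lambda_adv=0.5):
    # The generator wants the discriminator to classify its fused output as
    # real (non-saturating loss), on top of staying close to the ground truth
    # in content. lambda_adv is a hypothetical balancing weight.
    adv = -np.log(sigmoid(d_logit_fused) + 1e-12)
    return content_loss(fused, ground_truth) + lambda_adv * float(adv)

def discriminator_loss(d_logit_real, d_logit_fused):
    # The discriminator wants to score the ground truth high and the
    # generated fused image low (standard binary cross-entropy form).
    real_term = -np.log(sigmoid(d_logit_real) + 1e-12)
    fake_term = -np.log(1.0 - sigmoid(d_logit_fused) + 1e-12)
    return float(real_term + fake_term)

rng = np.random.default_rng(0)
gt = rng.random((8, 8))                          # stand-in ground-truth patch
fused = gt + 0.05 * rng.standard_normal((8, 8))  # stand-in generator output
g_loss = generator_loss(d_logit_fused=-1.0, fused=fused, ground_truth=gt)
d_loss = discriminator_loss(d_logit_real=2.0, d_logit_fused=-1.0)
```

In this formulation the adversarial term pushes the generator toward the ground-truth distribution even where the content loss is already small, which is the compensation effect the abstract refers to.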