Keywords
Inpainting
Prior probability
Artificial intelligence
Computer science
Generator (circuit theory)
Image restoration
Similarity (geometry)
Deep learning
Image (mathematics)
Computer vision
Pattern recognition (psychology)
Convolutional neural network
Machine learning
Image processing
Bayesian probability
Physics
Quantum mechanics
Power (physics)
Authors
Victor Lempitsky, Andrea Vedaldi, Dmitry Ulyanov
Identifier
DOI: 10.1109/cvpr.2018.00984
Abstract
Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity.
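The core mechanism described in the abstract can be illustrated in a few lines. Below is a minimal PyTorch sketch of the deep-image-prior idea applied to denoising; it is not the authors' reference implementation. The small generator architecture, the 32-channel random input z, and the step count and learning rate are illustrative assumptions (the paper uses an hourglass encoder-decoder and task-specific schedules).

# Minimal sketch of deep-image-prior denoising (illustrative only).
# A randomly initialized convolutional generator f_theta is fit to a
# single noisy image; stopping optimization early yields a denoised
# reconstruction, because the convolutional structure fits natural
# image content before it fits noise.
import torch
import torch.nn as nn

def make_generator(channels=64):
    # Simple fully convolutional stand-in for the paper's
    # encoder-decoder (hourglass) generator.
    return nn.Sequential(
        nn.Conv2d(32, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
    )

def deep_image_prior_denoise(x_noisy, steps=1800, lr=1e-2):
    # x_noisy: (1, 3, H, W) tensor with values in [0, 1]
    net = make_generator()
    # Fixed random input; only the network weights are optimized.
    z = torch.randn(1, 32, x_noisy.shape[2], x_noisy.shape[3])
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):  # early stopping acts as the regularizer
        opt.zero_grad()
        loss = ((net(z) - x_noisy) ** 2).mean()
        loss.backward()
        opt.step()
    return net(z).detach()

Note that no training data is involved: the only "prior" is the network structure itself, and the number of optimization steps is the knob that trades data fidelity against regularization.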