Inpainting
Computer science
Artificial intelligence
Deep learning
Image restoration
Regularization (mathematics)
Prior probability
Focus (optics)
Image (mathematics)
Speedup
Projector
Iterative reconstruction
Computer vision
Image processing
Optics
Physics
Operating system
Bayesian probability
Authors
Taihui Li,Hengkang Wang,Zhong Zhuang,Ju Sun
Identifier
DOI:10.1109/cvpr52729.2023.01743
Abstract
Deep image prior (DIP) has shown great promise in tackling a variety of image restoration (IR) and general visual inverse problems, needing no training data. However, the resulting optimization process is often very slow, inevitably hindering DIP's practical usage for time-sensitive scenarios. In this paper, we focus on IR, and propose two crucial modifications to DIP that help achieve substantial speedup: 1) optimizing the DIP seed while freezing randomly-initialized network weights, and 2) reducing the network depth. In addition, we reintroduce explicit priors, such as the sparse-gradient prior encoded by total-variation regularization, to preserve DIP's peak performance. We evaluate the proposed method on three IR tasks, including image denoising, image super-resolution, and image inpainting, against the original DIP and variants, as well as the competing metaDIP that uses meta-learning to learn good initializers with extra data. Our method is a clear winner in obtaining competitive restoration quality in a minimal amount of time. Our code is available at https://github.com/sun-umn/Deep-Random-Projector.
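The abstract's core recipe can be illustrated with a toy 1-D sketch: keep the randomly-initialized "network" weights frozen, optimize only the input seed, and add a total-variation (TV) prior. This is only an assumption-laden illustration, not the authors' code: the frozen "network" here is a random orthogonal mixing layer followed by a fixed linear-interpolation upsampler, a stand-in for the shallow DIP decoders the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 64, 16  # signal length, seed dimension

# Fixed linear-interpolation upsampler U (n x d): gives the frozen network
# a smoothness bias, loosely mimicking the upsampling layers of a DIP decoder.
U = np.zeros((n, d))
pos = np.linspace(0, d - 1, n)
k = np.minimum(pos.astype(int), d - 2)
frac = pos - k
U[np.arange(n), k] = 1 - frac
U[np.arange(n), k + 1] = frac

# Frozen randomly-initialized weights (orthogonalized for stable optimization).
W, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = U @ W  # the entire frozen "network": seed z -> reconstruction A @ z

x_clean = np.sin(np.linspace(0, 3 * np.pi, n))  # ground-truth signal
y = x_clean + 0.3 * rng.standard_normal(n)      # noisy observation

def tv_grad(x, eps=1e-8):
    """Gradient of the smoothed TV penalty sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    s = np.diff(x) / np.sqrt(np.diff(x) ** 2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

z = 0.1 * rng.standard_normal(d)  # the seed: the ONLY trainable variable
lam, lr = 0.01, 1.0               # TV weight and step size (hand-picked here)
for _ in range(2000):
    x = A @ z
    grad_z = A.T @ (2 * (x - y) / n + lam * tv_grad(x))
    z -= lr * grad_z              # update the seed; A stays frozen throughout

mse_noisy = np.mean((y - x_clean) ** 2)
mse_rec = np.mean((A @ z - x_clean) ** 2)
print(f"noisy MSE {mse_noisy:.4f} -> restored MSE {mse_rec:.4f}")
```

Because the reconstruction is confined to the low-dimensional, smoothness-biased range of the frozen map, fitting the noisy observation discards most of the noise, which is the intuition the sketch is meant to convey; speedup comes from the seed having far fewer parameters than the network weights.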