Keywords: Regularization (linguistics), Computer science, Kernel (algebra), Algorithm, Convex function, Regular polygon, Message passing, Convergence (economics), Mathematical optimization, Mathematics, Artificial intelligence, Discrete mathematics, Geometry, Economic growth, Economics, Programming language
Authors
Ruturaj G. Gavaskar, Chirayu D. Athalye, Kunal N. Chaudhury
Identifiers
DOI: 10.1109/tip.2021.3075092
Abstract
In plug-and-play (PnP) regularization, the knowledge of the forward model is combined with a powerful denoiser to obtain state-of-the-art image reconstructions. This is typically done by taking a proximal algorithm such as FISTA or ADMM, and formally replacing the proximal map associated with a regularizer by nonlocal means, BM3D, or a CNN denoiser. Each iterate of the resulting PnP algorithm involves some kind of inversion of the forward model followed by denoiser-induced regularization. A natural question in this regard is that of optimality, namely, do the PnP iterations minimize some f + g, where f is a loss function associated with the forward model and g is a regularizer? This has a straightforward solution if the denoiser can be expressed as a proximal map, as was shown to be the case for a class of linear symmetric denoisers. However, this result excludes kernel denoisers such as nonlocal means that are inherently non-symmetric. In this paper, we prove that a broader class of linear denoisers (including symmetric denoisers and kernel denoisers) can be expressed as a proximal map of some convex regularizer g. An algorithmic implication of this result for non-symmetric denoisers is that appropriate modifications of the PnP updates are needed to ensure convergence to a minimum of f + g. Apart from the convergence guarantee, the modified PnP algorithms are shown to produce good restorations.
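The PnP recipe described in the abstract (a gradient step on the forward-model loss, followed by a denoiser formally substituted for the proximal map) can be sketched in a generic ISTA-style form. This is a hypothetical 1D toy, not the paper's algorithm: the circulant blur `A`, the averaging filter `W` used as a linear symmetric denoiser, and all step sizes and iteration counts below are illustrative assumptions.

```python
import numpy as np

def circulant_filter(n, taps):
    """Circulant matrix applying a small periodic convolution kernel.
    `taps` is a list of (offset, weight) pairs."""
    M = np.zeros((n, n))
    for offset, weight in taps:
        for i in range(n):
            M[i, (i + offset) % n] += weight
    return M

def pnp_ista(y, A, denoise, step, iters=100):
    """Generic PnP-ISTA sketch (illustrative, not the paper's method):
    a gradient step on f(x) = 0.5 * ||A x - y||^2, then a denoising step
    that formally replaces the proximal map of the regularizer g."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)       # "inversion" of the forward model
        x = denoise(x - step * grad)   # denoiser-induced regularization
    return x

n = 64
rng = np.random.default_rng(0)
# Mild circulant blur as the forward model (assumed for illustration).
A = circulant_filter(n, [(0, 0.5), (-1, 0.25), (1, 0.25)])
# Averaging filter as a linear symmetric denoiser.
W = circulant_filter(n, [(0, 0.5), (-1, 0.25), (1, 0.25)])

x_true = np.sin(2 * np.pi * np.arange(n) / n)
y = A @ x_true + 0.1 * rng.standard_normal(n)
x_hat = pnp_ista(y, A, lambda z: W @ z, step=1.0)

print("error of observation:", np.linalg.norm(y - x_true))
print("error of PnP estimate:", np.linalg.norm(x_hat - x_true))
```

Since `W` here is symmetric and doubly stochastic with eigenvalues in [0, 1], it sits in the class of linear symmetric denoisers that, per the abstract, are known to be expressible as the proximal map of some convex regularizer g; the paper's contribution is extending such a characterization to non-symmetric kernel denoisers, for which the updates above would need modification.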