 Plug-and-play priors (PnP) is a popular framework for regularized signal reconstruction by using advanced denoisers within an iterative algorithm. In this paper, we discuss our recent online variant of PnP that uses only a subset of measurements at every iteration, which makes it scalable to very large datasets. We additionally present novel convergence results for both batch and online PnP algorithms.


## I Introduction

Consider the problem of estimating an unknown signal $x \in \mathbb{R}^n$ from a set of noisy measurements $y$. This task is often formulated as an optimization problem

$$\hat{x} = \operatorname*{arg\,min}_{x \in \mathbb{R}^n} \{C(x)\} \quad \text{with} \quad C(x) = D(x) + R(x), \tag{1}$$

where $D$ is the data-fidelity term and $R$ is the regularizer promoting solutions with desirable properties. Popular regularizers in imaging include nonnegativity, sparsity, and self-similarity.

Two common algorithms for solving the optimization problem (1) are the fast iterative shrinkage/thresholding algorithm (FISTA) [2] and the alternating direction method of multipliers (ADMM) [3]. These algorithms are well suited to large-scale imaging problems due to their low computational complexity and their ability to handle the non-smoothness of $R$. Both FISTA and ADMM are modular in the sense that the prior on the image is imposed only via the proximal operator, defined as

$$\operatorname{prox}_{\gamma R}(y) = \operatorname*{arg\,min}_{x \in \mathbb{R}^n} \left\{ \frac{1}{2}\|x - y\|_{\ell_2}^2 + \gamma R(x) \right\}. \tag{2}$$
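For intuition, the proximal operator (2) has a closed form for some simple regularizers. A minimal NumPy sketch for the illustrative choice $R(x) = \|x\|_{\ell_1}$, whose proximal operator is elementwise soft-thresholding (the function name `prox_l1` is our own):

```python
import numpy as np

def prox_l1(y, gamma):
    """Proximal operator of gamma * ||x||_1 (soft-thresholding).

    Solves argmin_x { 0.5 * ||x - y||^2 + gamma * ||x||_1 }
    in closed form, applied elementwise.
    """
    return np.sign(y) * np.maximum(np.abs(y) - gamma, 0.0)

y = np.array([-2.0, -0.5, 0.0, 0.3, 1.5])
print(prox_l1(y, 1.0))  # each entry is shrunk toward zero by 1.0
```

Entries smaller in magnitude than `gamma` are set exactly to zero, which is why this regularizer promotes sparsity.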

The mathematical equivalence of the proximal operator to regularized image denoising has recently inspired Venkatakrishnan et al. [4] to replace it with a more general denoising operator of controllable strength $\sigma > 0$. The original formulation of this plug-and-play priors (PnP) framework relies on ADMM, but it has recently been shown that the framework can be equally effective when used with other proximal algorithms. For example,

$$x^k \leftarrow \operatorname{denoise}_\sigma\!\left(s^{k-1} - \gamma \nabla D(s^{k-1})\right) \tag{3a}$$
$$s^k \leftarrow x^k + \left((q_{k-1} - 1)/q_k\right)\left(x^k - x^{k-1}\right), \tag{3b}$$

where $q_k = 1$ and $q_k = \frac{1}{2}\big(1 + \sqrt{1 + 4q_{k-1}^2}\big)$ lead to PnP-ISTA and PnP-FISTA, respectively, and $\gamma > 0$ is the step size [2], [5].
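The iteration (3) can be sketched in a few lines of NumPy. The sketch below assumes a least-squares data fidelity $D(x) = \frac{1}{2}\|y - Ax\|^2$ (so $\nabla D(x) = A^T(Ax - y)$) and treats the denoiser as a black-box callable; the identity "denoiser" used in the toy example, which reduces PnP-FISTA to plain FISTA, is purely illustrative:

```python
import numpy as np

def pnp_fista(y, A, denoise, gamma, num_iters=200):
    """Sketch of the PnP-FISTA iteration (3), assuming a least-squares
    data fidelity D(x) = 0.5 * ||y - A x||^2.
    `denoise` is any off-the-shelf denoiser x -> denoised x."""
    x = np.zeros(A.shape[1])
    s = x.copy()
    q_prev = 1.0
    for _ in range(num_iters):
        x_next = denoise(s - gamma * A.T @ (A @ s - y))   # (3a)
        q = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * q_prev**2))  # FISTA momentum
        s = x_next + ((q_prev - 1.0) / q) * (x_next - x)  # (3b)
        x, q_prev = x_next, q
    return x

# Toy usage: with the identity "denoiser", PnP-FISTA is plain FISTA and
# recovers the least-squares solution of a noiseless system.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
y = A @ x_true
x_hat = pnp_fista(y, A, denoise=lambda v: v,
                  gamma=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.linalg.norm(x_hat - x_true))  # small reconstruction error
```

In practice `denoise` would be a strong image denoiser (e.g. a CNN or BM3D-type method) and `gamma` would be set from the Lipschitz constant of $\nabla D$, as in the step-size choice above.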

In many applications, the data-fidelity term consists of a large number of component functions

$$D(x) = \frac{1}{k} \sum_{i=1}^{k} D_i(x) \quad \Rightarrow \quad \nabla D(x) = \frac{1}{k} \sum_{i=1}^{k} \nabla D_i(x), \tag{4}$$

where each $D_i$ and $\nabla D_i$ depends only on a subset $y_i$ of the measurements. Hence, when $k$ is large, traditional batch PnP algorithms may become impractical in terms of speed or memory requirements. The central idea of the recent PnP-SGD method [1] is to approximate the full gradient in (3a) with an average of component gradients

$$\widehat{\nabla} D(x) = \frac{1}{b} \sum_{j=1}^{b} \nabla D_{i_j}(x), \tag{5}$$

where $i_1, \dots, i_b$ are independent random variables distributed uniformly over $\{1, \dots, k\}$. The minibatch size parameter $b$ controls the number of gradient components evaluated per iteration.

Fig. 1: Comparison between the batch and online PnP algorithms under a fixed measurement budget of 10 (left) and 30 (right). SNR (dB) is plotted against the number of iterations for three algorithms: PnP-SGD, PnP-FISTA, and PnP-ADMM. For the same per-iteration cost, PnP-SGD can significantly outperform its batch counterparts. For a detailed discussion see [1].
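The minibatch estimate (5) is unbiased: averaging many draws recovers the full gradient (4). A small NumPy sketch, assuming scalar component fidelities $D_i(x) = \frac{1}{2}(y_i - a_i^T x)^2$ for rows $a_i$ of a matrix $A$ (this specific fidelity, and the function name `minibatch_grad`, are our own illustration):

```python
import numpy as np

def minibatch_grad(x, A, y, batch_size, rng):
    """Minibatch gradient estimate (5) for component fidelities
    D_i(x) = 0.5 * (y_i - a_i^T x)^2, so grad D_i(x) = a_i (a_i^T x - y_i).
    Indices i_1..i_b are drawn i.i.d. uniformly over the k components."""
    idx = rng.integers(0, A.shape[0], size=batch_size)
    Ab, yb = A[idx], y[idx]
    return Ab.T @ (Ab @ x - yb) / batch_size

rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 10))
x = rng.standard_normal(10)
y = A @ rng.standard_normal(10)

full = A.T @ (A @ x - y) / A.shape[0]  # full gradient (4)
est = np.mean([minibatch_grad(x, A, y, 50, rng) for _ in range(2000)], axis=0)
print(np.linalg.norm(est - full))  # averaging many estimates recovers (4)
```

Each iteration of PnP-SGD touches only `batch_size` of the $k$ components, which is what makes the method scalable when $k$ is very large.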

We denote the denoiser-gradient operator by

$$P(x) \triangleq \operatorname{denoise}_\sigma\!\left(x - \gamma \nabla D(x)\right) \tag{6}$$

and its set of fixed points by

$$\operatorname{fix}(P) \triangleq \{x \in \mathbb{R}^n : x = P(x)\}. \tag{7}$$
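To make (6)-(7) concrete, consider the assumed example of a least-squares data fidelity with the identity "denoiser": then $\operatorname{fix}(P)$ is exactly the set of minimizers of $D$, since $P(x) = x$ iff $\nabla D(x) = 0$. A quick numerical check (the function name `P` mirrors the operator in (6)):

```python
import numpy as np

def P(x, A, y, gamma, denoise):
    """Denoiser-gradient operator (6), assuming the least-squares
    fidelity D(x) = 0.5 * ||y - A x||^2."""
    return denoise(x - gamma * A.T @ (A @ x - y))

# With the identity "denoiser", the least-squares solution x* satisfies
# grad D(x*) = 0, hence P(x*) = x*, i.e. x* is in fix(P).
rng = np.random.default_rng(2)
A = rng.standard_normal((12, 4))
y = rng.standard_normal(12)
x_star, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = np.linalg.norm(x_star - P(x_star, A, y, 0.1, denoise=lambda v: v))
print(residual)  # ~0: x* is a fixed point of P
```

With a nontrivial denoiser, fixed points of $P$ balance data consistency against the denoiser's implicit prior, and the residual $\|x - P(x)\|$ is a natural quantity to monitor during the iterations.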

The convergence of both batch PnP-FISTA and online PnP-SGD was extensively discussed in [1]. In particular, when $D$ is convex and smooth and $\operatorname{denoise}_\sigma$ is an averaged operator (along with other mild assumptions), it is possible to theoretically establish that the iterates of PnP-ISTA (without acceleration) get arbitrarily close to $\operatorname{fix}(P)$ at a quantifiable rate. Additionally, it is possible to establish that, in expectation, the iterates of PnP-SGD can also be made arbitrarily close to $\operatorname{fix}(P)$ by increasing the minibatch parameter $b$. As illustrated in Fig. 1, this makes PnP-SGD a useful and mathematically sound alternative to the traditional batch PnP algorithms.

## References

1. Y. Sun, B. Wohlberg, and U. S. Kamilov, “An online plug-and-play algorithm for regularized image reconstruction,” arXiv:1809.04693 [cs.CV], 2018.
2. A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Trans. Image Process., 2009.
3. M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process., 2010.
4. S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in Proc. GlobalSIP, 2013.
5. U. S. Kamilov, H. Mansour, and B. Wohlberg, “A plug-and-play priors approach for solving nonlinear imaging inverse problems,” IEEE Signal Process. Lett., 2017.