Consider the problem of estimating an unknown signal $x \in \mathbb{R}^n$ from a set of noisy measurements $y \in \mathbb{R}^m$. This task is often formulated as an optimization problem

$$\widehat{x} = \mathop{\mathrm{arg\,min}}_{x \in \mathbb{R}^n} f(x) \quad\text{with}\quad f(x) = g(x) + h(x), \tag{1}$$

where $g$ is the data-fidelity term and $h$ is the regularizer promoting solutions with desirable properties. Some popular regularizers in imaging include nonnegativity, sparsity, and self-similarity.
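For concreteness, one common instantiation of (1), used here purely as an illustrative example rather than the specific setting of the cited works, is a linear measurement model with a sparsity-promoting penalty:

$$g(x) = \tfrac{1}{2}\|y - Ax\|_2^2, \qquad h(x) = \tau\|x\|_1,$$

where $A$ is the measurement operator and $\tau > 0$ controls the regularization strength.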
Two common algorithms for solving the optimization problem (1) are the fast iterative shrinkage/thresholding algorithm (FISTA) and the alternating direction method of multipliers (ADMM). These algorithms are suitable for solving large-scale imaging problems due to their low computational complexity and their ability to handle the non-smoothness of $h$. Both FISTA and ADMM have modular structures in the sense that the prior on the image is only imposed via the proximal operator, defined as

$$\mathrm{prox}_{\gamma h}(z) \,:=\, \mathop{\mathrm{arg\,min}}_{x \in \mathbb{R}^n} \left\{ \frac{1}{2}\|x - z\|_2^2 + \gamma h(x) \right\}, \quad \gamma > 0. \tag{2}$$
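As a quick illustration of (2), the proximal operator of the $\ell_1$ penalty $h(x) = \tau\|x\|_1$ has a closed form, namely element-wise soft-thresholding. A minimal NumPy sketch follows; the function name and interface are ours, not taken from the cited works:

```python
import numpy as np

def prox_l1(z, gamma, tau):
    """Proximal operator of h(x) = tau * ||x||_1 with step gamma.

    Solves argmin_x { 0.5 * ||x - z||^2 + gamma * tau * ||x||_1 },
    which reduces to element-wise soft-thresholding at level gamma * tau.
    """
    return np.sign(z) * np.maximum(np.abs(z) - gamma * tau, 0.0)

# Example: small entries are set exactly to zero.
z = np.array([0.3, -1.2, 0.05, 2.0])
print(prox_l1(z, gamma=1.0, tau=0.1))
```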
The mathematical equivalence of the proximal operator to regularized image denoising has recently inspired Venkatakrishnan et al. to replace it with a more general denoising operator $D_\sigma$ of controllable strength $\sigma > 0$. The original formulation of this plug-and-play priors (PnP) framework relies on ADMM, but it has recently been shown that it can be equally effective when used with other proximal algorithms. For example,

$$z^k \leftarrow s^{k-1} - \gamma \nabla g(s^{k-1}), \tag{3a}$$
$$x^k \leftarrow D_\sigma(z^k), \tag{3b}$$
$$s^k \leftarrow x^k + \left(\frac{q_{k-1}-1}{q_k}\right)\left(x^k - x^{k-1}\right), \tag{3c}$$

where $q_k = 1$ and $q_k = \frac{1}{2}\left(1 + \sqrt{1 + 4q_{k-1}^2}\right)$ lead to PnP-ISTA and PnP-FISTA, respectively, and $\gamma > 0$ is the step-size.
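A minimal sketch of the iteration (3) is given below, assuming a generic `denoise(z, sigma)` routine (e.g., an off-the-shelf Gaussian denoiser) and a callable `grad_g` returning the gradient of the data-fidelity term; all names are illustrative and not part of the cited implementations:

```python
import numpy as np

def pnp_fista(x0, grad_g, denoise, gamma, sigma, num_iter=100, accelerate=True):
    """PnP-ISTA/FISTA sketch following (3a)-(3c).

    grad_g : callable returning the gradient of the data-fidelity term g.
    denoise: callable z -> D_sigma(z), the plug-in denoiser of strength sigma.
    gamma  : step-size; accelerate=False fixes q_k = 1, i.e., plain PnP-ISTA.
    """
    x_prev = x0.copy()
    s = x0.copy()
    q_prev = 1.0
    for _ in range(num_iter):
        z = s - gamma * grad_g(s)            # (3a) gradient step on g
        x = denoise(z, sigma)                # (3b) denoiser replaces the prox
        if accelerate:
            q = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * q_prev**2))
        else:
            q = 1.0
        s = x + ((q_prev - 1.0) / q) * (x - x_prev)   # (3c) momentum step
        x_prev, q_prev = x, q
    return x_prev
```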
In many applications, the data-fidelity term consists of a large number of component functions

$$g(x) = \frac{1}{I} \sum_{i=1}^{I} g_i(x), \tag{4}$$

where each component $g_i$ and its gradient $\nabla g_i$ depend only on a subset $y_i$ of the measurements $y$. Hence, when $I$ is large, traditional batch PnP algorithms may become impractical in terms of speed or memory requirements. The central idea of the recent PnP-SGD method is to approximate the full gradient $\nabla g$ in (3a) with an average of component gradients

$$\widehat{\nabla} g(x) \,:=\, \frac{1}{b} \sum_{j=1}^{b} \nabla g_{i_j}(x), \tag{5}$$

where $i_1, \dots, i_b$ are independent random variables that are distributed uniformly over $\{1, \dots, I\}$. The minibatch size parameter $b \geq 1$ controls the number of per-iteration gradient components.
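A possible NumPy sketch of the minibatch approximation (5) is shown below, assuming the component gradients are available as a list of callables `grad_g_components` (our naming, not from the cited works):

```python
import numpy as np

def minibatch_gradient(x, grad_g_components, b, rng=None):
    """Approximate the full gradient (1/I) * sum_i grad g_i(x) as in (5).

    Draws b indices i_1, ..., i_b i.i.d. uniformly from {0, ..., I-1}
    (with replacement) and averages the corresponding component gradients,
    which yields an unbiased estimate of the full gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    I = len(grad_g_components)
    indices = rng.integers(low=0, high=I, size=b)
    return sum(grad_g_components[i](x) for i in indices) / b
```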
We denote the denoiser-gradient operator by

$$\mathsf{P}(x) \,:=\, D_\sigma\big(x - \gamma \nabla g(x)\big),$$

and its set of fixed points by

$$\mathrm{Fix}(\mathsf{P}) \,:=\, \{x \in \mathbb{R}^n : \mathsf{P}(x) = x\}.$$
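The following sketch, a helper of our own under the same naming assumptions as above, builds the denoiser-gradient operator $\mathsf{P}$ and measures how far an iterate is from being a fixed point, which is the quantity the convergence statements are expressed in terms of:

```python
import numpy as np

def make_P(grad_g, denoise, gamma, sigma):
    """Return the denoiser-gradient operator P(x) = D_sigma(x - gamma * grad g(x))."""
    def P(x):
        return denoise(x - gamma * grad_g(x), sigma)
    return P

def fixed_point_residual(x, P):
    """||P(x) - x||_2; this is zero exactly when x lies in Fix(P)."""
    return np.linalg.norm(P(x) - x)
```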
The convergence of both batch PnP-FISTA and online PnP-SGD was extensively discussed in the work introducing PnP-SGD. In particular, when $g$ is convex and smooth, and $D_\sigma$ is an averaged operator (along with other mild assumptions), it is possible to theoretically establish that the iterates of PnP-ISTA (without acceleration) get arbitrarily close to $\mathrm{Fix}(\mathsf{P})$ with rate $O(1/t)$. Additionally, it is possible to establish that, in expectation, the iterates of PnP-SGD can also be made arbitrarily close to $\mathrm{Fix}(\mathsf{P})$ by increasing the minibatch parameter $b$. As illustrated in Fig. 1, this makes PnP-SGD a useful and mathematically sound alternative to traditional batch PnP algorithms.
-  Y. Sun, B. Wohlberg, and U. S. Kamilov, “An online plug-and-play algorithm for regularized image reconstruction,” arXiv:1809.04693 [cs.CV], 2018.
-  A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Trans. Image Process., 2009.
-  M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process., 2010.
-  S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in Proc. GlobalSIP, 2013.
-  U. S. Kamilov, H. Mansour, and B. Wohlberg, “A plug-and-play priors approach for solving nonlinear imaging inverse problems,” IEEE Signal Process. Lett., 2017.