Plug-In Stochastic Gradient Method

11/08/2018
by Yu Sun, et al.

Plug-and-play priors (PnP) is a popular framework for regularized signal reconstruction by using advanced denoisers within an iterative algorithm. In this paper, we discuss our recent online variant of PnP that uses only a subset of measurements at every iteration, which makes it scalable to very large datasets. We additionally present novel convergence results for both batch and online PnP algorithms.


I. Introduction

Consider the problem of estimating an unknown signal $x \in \mathbb{R}^n$ from a set of noisy measurements $y \in \mathbb{R}^m$. This task is often formulated as an optimization problem

$$\widehat{x} = \operatorname*{arg\,min}_{x \in \mathbb{R}^n} f(x), \qquad f(x) = g(x) + h(x), \tag{1}$$

where $g$ is the data-fidelity term and $h$ is the regularizer promoting solutions with desirable properties. Some popular regularizers in imaging include nonnegativity, sparsity, and self-similarity.

Two common algorithms for solving optimization problem (1) are the fast iterative shrinkage/thresholding algorithm (FISTA) [2] and the alternating direction method of multipliers (ADMM) [3]. These algorithms are suitable for solving large-scale imaging problems due to their low computational complexity and their ability to handle the non-smoothness of $h$. Both FISTA and ADMM have modular structures in the sense that the prior on the image is imposed only via the proximal operator, defined as

$$\mathrm{prox}_{\gamma h}(z) := \operatorname*{arg\,min}_{x \in \mathbb{R}^n} \left\{ \frac{1}{2}\|x - z\|_2^2 + \gamma h(x) \right\}. \tag{2}$$
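For intuition (this example is not part of the original text), the proximal operator has a simple closed form for many common regularizers. A minimal NumPy sketch for the $\ell_1$ regularizer $h(x) = \tau\|x\|_1$, whose proximal operator is element-wise soft-thresholding:

```python
import numpy as np

def prox_l1(z, gamma, tau=1.0):
    """Proximal operator of h(x) = tau * ||x||_1 evaluated at z.

    Solves argmin_x { 0.5 * ||x - z||^2 + gamma * tau * ||x||_1 },
    whose closed-form solution is element-wise soft-thresholding.
    """
    t = gamma * tau
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
```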

The mathematical equivalence of the proximal operator to regularized image denoising has recently inspired Venkatakrishnan et al. [4] to replace it with a more general denoising operator $D_\sigma$ of controllable strength $\sigma > 0$. The original formulation of this plug-and-play priors (PnP) framework relies on ADMM, but it has recently been shown that it can be equally effective when used with other proximal algorithms. For example,

$$x^k \leftarrow D_\sigma\!\left(s^{k-1} - \gamma \nabla g(s^{k-1})\right) \tag{3a}$$
$$s^k \leftarrow x^k + \left(\frac{q_{k-1}-1}{q_k}\right)\left(x^k - x^{k-1}\right), \tag{3b}$$

where $q_k = 1$ and $q_k = \frac{1}{2}\big(1 + \sqrt{1 + 4q_{k-1}^2}\big)$ lead to PnP-ISTA and PnP-FISTA, respectively, and $\gamma > 0$ is the step-size [5].
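The iterations (3a)-(3b) amount to a gradient step on the data-fidelity term followed by denoising. Below is a minimal sketch in NumPy, assuming a user-supplied gradient `grad_g` (e.g., $A^{\mathsf{T}}(Ax - y)$ for a linear forward model) and a denoiser `denoise(x, sigma)` such as BM3D or a CNN; the function names and signatures here are illustrative, not taken from the paper.

```python
import numpy as np

def pnp_fista(grad_g, denoise, x0, gamma, sigma, num_iter=100, accelerate=True):
    """Sketch of the PnP-ISTA/PnP-FISTA iterations (3a)-(3b).

    grad_g    : callable returning the gradient of the data-fidelity term g
    denoise   : callable playing the role of D_sigma (the plugged-in denoiser)
    x0        : initial estimate (NumPy array)
    gamma     : step-size; sigma : denoiser strength
    accelerate=False fixes q_k = 1 and recovers PnP-ISTA.
    """
    x_prev = x0.copy()
    s = x0.copy()
    q_prev = 1.0
    for _ in range(num_iter):
        x = denoise(s - gamma * grad_g(s), sigma)                       # update (3a)
        q = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * q_prev ** 2)) if accelerate else 1.0
        s = x + ((q_prev - 1.0) / q) * (x - x_prev)                     # update (3b)
        x_prev, q_prev = x, q
    return x_prev
```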

In many applications, the data-fidelity term consists of a large number of component functions

$$g(x) = \frac{1}{I}\sum_{i=1}^{I} g_i(x), \tag{4}$$

where each $g_i$ depends only on a subset $y_i$ of the measurements $y$. Hence, when $I$ is large, traditional batch PnP algorithms may become impractical in terms of speed or memory requirements. The central idea of the recent PnP-SGD method [1] is to approximate the full gradient in (3a) with an average of component gradients

$$\widehat{\nabla} g(x) = \frac{1}{B}\sum_{b=1}^{B} \nabla g_{i_b}(x), \tag{5}$$

where $i_1, \dots, i_B$ are independent random variables that are distributed uniformly over $\{1, \dots, I\}$. The minibatch size parameter $B$ controls the number of per-iteration gradient components.
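To make (5) concrete, here is an illustrative sketch of the minibatch gradient and of a single PnP-SGD iteration, assuming the component gradients are available as a list of callables `grad_g_i` and reusing the hypothetical `denoise` interface from the sketch above.

```python
import numpy as np

def minibatch_gradient(grad_g_i, x, batch_size, rng):
    """Approximate the full gradient by the average (5) over B components.

    grad_g_i   : list of callables; grad_g_i[i](x) returns the gradient of g_i at x
    batch_size : minibatch parameter B
    rng        : numpy.random.Generator used to draw the indices i_1, ..., i_B
    """
    I = len(grad_g_i)
    indices = rng.integers(low=0, high=I, size=batch_size)   # i.i.d. uniform draws
    return sum(grad_g_i[i](x) for i in indices) / batch_size

def pnp_sgd_step(x, grad_g_i, denoise, gamma, sigma, batch_size, rng):
    """One PnP-SGD iteration: stochastic gradient step followed by denoising."""
    g_hat = minibatch_gradient(grad_g_i, x, batch_size, rng)
    return denoise(x - gamma * g_hat, sigma)
```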

Fig. 1: Comparison between the batch and online PnP algorithms under a fixed measurement budget of 10 (left) and 30 (right). SNR (dB) is plotted against the number of iterations for three algorithms: PnP-SGD, PnP-FISTA, and PnP-ADMM. For the same per-iteration cost, PnP-SGD can significantly outperform its batch counterparts. For a detailed discussion, see [1].

We denote the denoiser-gradient operator by

$$\mathsf{T}(x) := D_\sigma\!\left(x - \gamma \nabla g(x)\right) \tag{6}$$

and its set of fixed points by

$$\mathrm{fix}(\mathsf{T}) := \left\{x \in \mathbb{R}^n : \mathsf{T}(x) = x\right\}. \tag{7}$$
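One natural way to monitor progress toward $\mathrm{fix}(\mathsf{T})$ in practice (not prescribed by the paper) is the fixed-point residual $\|x - \mathsf{T}(x)\|$; a small illustrative helper, reusing the hypothetical `grad_g` and `denoise` callables from the sketches above:

```python
import numpy as np

def fixed_point_residual(x, grad_g, denoise, gamma, sigma):
    """Distance ||x - T(x)|| for the denoiser-gradient operator T in (6).

    A value close to zero indicates that x is (approximately) a member of
    the fixed-point set fix(T) defined in (7).
    """
    Tx = denoise(x - gamma * grad_g(x), sigma)
    return np.linalg.norm(x - Tx)
```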

The convergence of both batch PnP-FISTA and online PnP-SGD was extensively discussed in [1]. In particular, when $g$ is convex and smooth, and $D_\sigma$ is an averaged operator (along with other mild assumptions), it is possible to theoretically establish that the iterates of PnP-ISTA (without acceleration) get arbitrarily close to $\mathrm{fix}(\mathsf{T})$ at the rate $O(1/t)$. Additionally, it is possible to establish that, in expectation, the iterates of PnP-SGD can also be made arbitrarily close to $\mathrm{fix}(\mathsf{T})$ by increasing the minibatch parameter $B$. As illustrated in Fig. 1, this makes PnP-SGD a useful and mathematically sound alternative to the traditional batch PnP algorithms.

References

  • [1] Y. Sun, B. Wohlberg, and U. S. Kamilov, “An online plug-and-play algorithm for regularized image reconstruction,” arXiv:1809.04693 [cs.CV], 2018.
  • [2] A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Trans. Image Process., 2009.
  • [3] M. V. Afonso, J. M. Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” IEEE Trans. Image Process., 2010.
  • [4] S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in Proc. GlobalSIP, 2013.
  • [5] U. S. Kamilov, H. Mansour, and B. Wohlberg, “A plug-and-play priors approach for solving nonlinear imaging inverse problems,” IEEE Signal Process. Lett., 2017.