Plug-In Stochastic Gradient Method

11/08/2018
by Yu Sun, et al.

Plug-and-play priors (PnP) is a popular framework for regularized signal reconstruction that uses advanced denoisers within an iterative algorithm. In this paper, we discuss our recent online variant of PnP that uses only a subset of the measurements at every iteration, which makes it scalable to very large datasets. We additionally present novel convergence results for both batch and online PnP algorithms.
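The online variant described above can be sketched as follows: at each iteration, a random subset of the measurements forms a stochastic gradient of the data-fidelity term, and a plug-in denoiser then acts as the prior. This is a minimal illustration, not the paper's implementation; the function names (`pnp_sgd`, `soft_threshold`), the least-squares data-fidelity term, and the soft-thresholding denoiser are all assumptions chosen for simplicity.

```python
import numpy as np

def pnp_sgd(A, y, denoise, step=0.05, batch=10, iters=1000, seed=0):
    """Sketch of an online plug-and-play method.

    At each iteration, a random minibatch of measurement rows gives a
    stochastic gradient of the data-fidelity term (1/2)||A x - y||^2,
    followed by an application of the plug-in denoiser as the prior.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        idx = rng.choice(m, size=batch, replace=False)  # measurement subset
        Ab, yb = A[idx], y[idx]
        grad = Ab.T @ (Ab @ x - yb) / batch             # stochastic gradient
        x = denoise(x - step * grad)                    # denoiser as prior
    return x

def soft_threshold(v, lam=0.002):
    """Hypothetical stand-in denoiser (soft-thresholding, i.e. an l1 prior);
    in practice PnP plugs in an advanced denoiser such as BM3D or a CNN."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
```

Because only `batch` rows of `A` are touched per iteration, the per-iteration cost is independent of the total number of measurements, which is what makes the online scheme scalable to very large datasets.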

Related research

- 09/12/2018 · An Online Plug-and-Play Algorithm for Regularized Image Reconstruction — Plug-and-play priors (PnP) is a powerful framework for regularizing imag...
- 10/31/2018 · Regularized Fourier Ptychography using an Online Plug-and-Play Algorithm — The plug-and-play priors (PnP) framework has been recently shown to achi...
- 11/25/2018 · Inexact SARAH Algorithm for Stochastic Optimization — We develop and analyze a variant of variance reducing stochastic gradien...
- 09/04/2023 · Corgi^2: A Hybrid Offline-Online Approach To Storage-Aware Data Shuffling For SGD — When using Stochastic Gradient Descent (SGD) for training machine learni...
- 06/05/2020 · Scalable Plug-and-Play ADMM with Convergence Guarantees — Plug-and-play priors (PnP) is a broadly applicable methodology for solvi...
- 09/13/2023 · A Flexible Online Framework for Projection-Based STFT Phase Retrieval — Several recent contributions in the field of iterative STFT phase retrie...
- 12/11/2012 · A Learning Framework for Morphological Operators using Counter-Harmonic Mean — We present a novel framework for learning morphological operators using ...
