The Practicality of Stochastic Optimization in Imaging Inverse Problems

by Junqi Tang, et al.

In this work we investigate the practicality of stochastic gradient descent (SGD) and its recently introduced variance-reduced variants for imaging inverse problems. In the machine learning literature, such algorithms have been shown to be optimal in terms of theoretical complexity and to deliver large empirical improvements over deterministic gradient methods. Surprisingly, in some tasks, such as image deblurring, many of these methods fail to converge faster than accelerated deterministic gradient methods, even when measured in epoch counts. We investigate this phenomenon and propose a theory-inspired mechanism for characterizing whether an inverse problem is well suited to stochastic optimization. Using standard tools from numerical linear algebra, we derive conditions on the structure of the inverse problem under which stochastic gradient methods are an appropriate choice. Based on our analysis, we give practitioners convenient ways to check whether a given inverse problem should be solved with stochastic gradient methods or with classical deterministic gradient methods. Our results also provide guidance on choosing appropriate partition-minibatch schemes. Finally, we propose an accelerated primal-dual SGD algorithm to tackle another key bottleneck of stochastic optimization: the heavy computation of proximal operators. The proposed method converges fast in practice and can efficiently handle non-smooth regularization terms that are coupled with linear operators.
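To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of the partition-minibatch SGD baseline the abstract refers to, applied to a linear least-squares inverse problem min_x (1/2)||Ax - b||^2. The rows of the forward operator A are split into m disjoint blocks, and each step uses an unbiased gradient estimate built from one block; the function name, step-size rule, and block count are illustrative choices, not taken from the paper.

```python
import numpy as np

def partition_sgd(A, b, m=4, epochs=300, step=None, seed=0):
    """Illustrative partition-minibatch SGD for min_x 0.5*||Ax - b||^2.

    The rows of A are split into m disjoint blocks; one pass over a
    random permutation of the blocks counts as one epoch.
    """
    n_rows, n_cols = A.shape
    rng = np.random.default_rng(seed)
    blocks = np.array_split(np.arange(n_rows), m)  # disjoint row partition
    if step is None:
        # Conservative constant step size: 1 / (m * largest block
        # Lipschitz constant), since the scaled block gradient below
        # has Lipschitz constant m * ||A_i||_2^2.
        step = 1.0 / (m * max(np.linalg.norm(A[idx], 2) ** 2 for idx in blocks))
    x = np.zeros(n_cols)
    for _ in range(epochs):
        for i in rng.permutation(m):
            idx = blocks[i]
            # Scaling by m makes this an unbiased estimate of the
            # full gradient A^T (Ax - b) under uniform block sampling.
            g = m * A[idx].T @ (A[idx] @ x - b[idx])
            x -= step * g
    return x
```

For a consistent system (b = A @ x_true with an overdetermined, well-conditioned A), this constant-step scheme converges to x_true, since every block gradient vanishes at the solution; on noisy or ill-conditioned problems one would instead need decaying steps or the variance-reduction techniques discussed in the paper.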




An Analysis of Stochastic Variance Reduced Gradient for Linear Inverse Problems


A Fast Stochastic Plug-and-Play ADMM for Imaging Inverse Problems


Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters


On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems


Accelerating Deep Unrolling Networks via Dimensionality Reduction


On the Convergence Rate of Projected Gradient Descent for a Back-Projection based Objective


Active Probabilistic Inference on Matrices for Pre-Conditioning in Stochastic Optimization
