Learning to solve TV regularized problems with unrolled algorithms

10/19/2020
by Hamza Cherkaoui, et al.

Total Variation (TV) is a popular regularization strategy that promotes piece-wise constant signals by penalizing the ℓ_1-norm of the first-order derivative of the estimated signal. The resulting optimization problem is usually solved with iterative algorithms such as proximal gradient descent, primal-dual algorithms, or ADMM. However, these methods can require a very large number of iterations to reach a suitable solution. In this paper, we accelerate such iterative algorithms by unrolling proximal gradient descent solvers and learning their parameters for 1D TV regularized problems. While this could be done in the synthesis formulation, we show that it leads to slower convergence. The main difficulty in applying such methods to the analysis formulation lies in computing derivatives through the proximal operator. As our main contribution, we develop and characterize two approaches to do so, describe their benefits and limitations, and identify the regime in which they actually improve over iterative procedures. We validate these findings with experiments on synthetic and real data.
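
To make the synthesis route mentioned in the abstract concrete, below is a minimal PyTorch sketch of unrolled proximal gradient descent (ISTA) for 1D TV denoising in the synthesis formulation: writing the signal as x = Lz, with L the lower-triangular cumulative-sum matrix, turns the TV penalty into an ℓ_1-norm on z (minus the unpenalized offset z[0]), so the proximal operator is plain soft-thresholding. The class name, the choice to learn only per-layer step sizes, and the supervised training loop are illustrative assumptions, not the authors' exact architecture or loss; the analysis-formulation unrolling studied in the paper instead requires differentiating through the TV proximal operator, which this sketch deliberately sidesteps.

```python
import torch
import torch.nn as nn


def soft_threshold(z, thresh):
    """Proximal operator of thresh * ||.||_1 (elementwise soft-thresholding)."""
    return torch.sign(z) * torch.clamp(z.abs() - thresh, min=0.0)


class UnrolledTVSynthesis(nn.Module):
    """Unrolled ISTA for min_z 0.5*||L z - y||^2 + lam*||z[1:]||_1, with x = L z.

    L is the cumulative-sum matrix, so ||z[1:]||_1 equals the TV semi-norm of x
    and z[0] is an unpenalized offset. Hypothetical sketch: only the per-layer
    step sizes are learned here.
    """

    def __init__(self, n, lam, n_layers=20):
        super().__init__()
        self.register_buffer("L", torch.tril(torch.ones(n, n)))
        lipschitz = torch.linalg.svdvals(self.L)[0].item() ** 2
        # one learnable step size per unrolled layer, initialized at 1/L
        self.steps = nn.Parameter(torch.full((n_layers,), 1.0 / lipschitz))
        self.lam = lam

    def forward(self, y):                        # y: (batch, n) noisy signals
        z = torch.zeros_like(y)
        for step in self.steps:
            grad = (z @ self.L.T - y) @ self.L   # gradient of the data-fit term
            z = z - step * grad
            z = torch.cat(                       # prox: keep z[0], threshold the rest
                [z[..., :1], soft_threshold(z[..., 1:], step * self.lam)], dim=-1
            )
        return z @ self.L.T                      # map back to signal space


# Toy usage (hypothetical setup): learn step sizes on piece-wise constant data.
n = 64
net = UnrolledTVSynthesis(n=n, lam=0.5, n_layers=15)
x = torch.zeros(32, n)
x[:, 20:40] = 1.0                                # piece-wise constant targets
y = x + 0.1 * torch.randn_like(x)                # noisy observations
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = ((net(y) - x) ** 2).mean()            # supervised denoising loss
    loss.backward()
    opt.step()
```

Because every operation in the forward pass (matrix products, soft-thresholding) is differentiable almost everywhere, autograd can backpropagate through the unrolled layers directly; this is exactly what becomes non-trivial in the analysis formulation, where the inner prox is itself a TV problem.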

Related research

11/03/2014 · Modular proximal optimization for multidimensional total-variation regularization
One of the most frequently used notions of "structured sparsity" is that...

01/14/2022 · ℓ_1-norm constrained multi-block sparse canonical correlation analysis via proximal gradient descent
Multi-block CCA constructs linear relationships explaining coherent vari...

09/20/2016 · A very fast iterative algorithm for TV-regularized image reconstruction with applications to low-dose and few-view CT
This paper concerns iterative reconstruction for low-dose and few-view C...

05/15/2019 · Iterative Alpha Expansion for estimating gradient-sparse signals from linear measurements
We consider estimating a piecewise-constant image, or a gradient-sparse ...

12/19/2017 · Snake: a Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs
A regularized optimization problem over a large unstructured graph is st...

05/03/2022 · Proximal stabilized Interior Point Methods for quadratic programming and low-frequency-updates preconditioning techniques
In this work, in the context of Linear and Quadratic Programming, we int...

07/11/2021 · Dual Optimization for Kolmogorov Model Learning Using Enhanced Gradient Descent
Data representation techniques have made a substantial contribution to a...
