Smooth over-parameterized solvers for non-smooth structured optimization

05/03/2022
by Clarice Poon, et al.

Non-smooth optimization is a core ingredient of many imaging and machine learning pipelines. Non-smoothness encodes structural constraints on the solutions, such as sparsity, group sparsity, low rank and sharp edges. It is also the basis for the definition of robust loss functions and scale-free functionals such as the square-root Lasso. Standard approaches to dealing with non-smoothness leverage either proximal splitting or coordinate descent. These approaches are effective but usually require parameter tuning, preconditioning or some form of support pruning. In this work, we advocate and study a different route, which operates through a non-convex but smooth over-parameterization of the underlying non-smooth optimization problem. This generalizes the quadratic variational forms that are at the heart of the popular Iteratively Reweighted Least Squares (IRLS) method. Our main theoretical contribution connects gradient descent on this reformulation to a mirror descent flow with a varying Hessian metric. This analysis is crucial for deriving convergence bounds that are dimension-free, which explains the efficiency of the method when small grid sizes are used in imaging. Our main algorithmic contribution is the application of the Variable Projection (VarPro) method, which defines a new formulation by explicitly minimizing over part of the variables. This leads to better conditioning of the minimized functional and improves the convergence of simple but very efficient gradient-based methods, for instance quasi-Newton solvers. We exemplify the use of this new solver on regularized regression problems for inverse problems and supervised learning, including the total variation prior and non-convex regularizers.
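To make the two ideas in the abstract concrete, here is a minimal, self-contained sketch (not the authors' code) on the Lasso, using the classical Hadamard factorization x = u * v (entrywise), for which ||x||_1 = min over {u * v = x} of (||u||^2 + ||v||^2)/2. The first part runs plain gradient descent on the smooth over-parameterized objective; the second eliminates v in closed form, VarPro-style, and hands the reduced problem to an off-the-shelf quasi-Newton (L-BFGS) solver. All names and parameter values (lam, step, v_star) are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of smooth over-parameterization of the Lasso:
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
# via the smooth non-convex objective
#   G(u, v) = 0.5*||A (u*v) - b||^2 + 0.5*lam*(||u||^2 + ||v||^2).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p, lam = 50, 200, 0.5
A = rng.standard_normal((n, p))
b = rng.standard_normal(n)

def grad_G(u, v):
    r = A @ (u * v) - b                       # residual at x = u * v
    g = A.T @ r                               # gradient w.r.t. x
    return g * v + lam * u, g * u + lam * v   # chain rule + quadratic penalty

# (1) Plain gradient descent on G, with a small fixed step for illustration.
u, v = np.ones(p), np.zeros(p)
step = 0.5 / (np.linalg.norm(A, 2) ** 2 + lam)
for _ in range(20000):
    gu, gv = grad_G(u, v)
    u -= step * gu
    v -= step * gv

# (2) VarPro: for fixed u, the optimal v solves a ridge regression in closed
# form; quasi-Newton (L-BFGS) then runs on the better-conditioned reduced
# function f(u) = min_v G(u, v).
def v_star(u):
    Au = A * u                                # A @ diag(u)
    return np.linalg.solve(Au.T @ Au + lam * np.eye(p), Au.T @ b)

def f_and_grad(u):
    v = v_star(u)
    r = A @ (u * v) - b
    val = 0.5 * r @ r + 0.5 * lam * (u @ u + v @ v)
    grad = (A.T @ r) * v + lam * u            # envelope theorem: d_v G = 0 at v*
    return val, grad

res = minimize(f_and_grad, np.ones(p), jac=True, method="L-BFGS-B")
x = res.x * v_star(res.x)                     # recover the (sparse) Lasso solution
```

In this toy setting the VarPro reduction typically needs far fewer iterations than the fixed-step descent, which illustrates the conditioning benefit the abstract refers to; the gradient descent phase, run from the u = 1, v = 0 initialization, is the kind of dynamics the paper relates to a mirror descent flow.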

Related research

- Smooth Bilevel Programming for Sparse Regularization (06/02/2021): Iteratively reweighted least square (IRLS) is a popular approach to solv...
- Convergence guarantees for a class of non-convex and non-smooth optimization problems (04/25/2018): We consider the problem of finding critical points of functions that are...
- Inertial Block Mirror Descent Method for Non-Convex Non-Smooth Optimization (03/05/2019): In this paper, we propose inertial versions of block coordinate descent ...
- A Linearly Convergent Douglas-Rachford Splitting Solver for Markovian Information-Theoretic Optimization Problems (03/14/2022): In this work, we propose solving the Information bottleneck (IB) and Pri...
- Screening Rules for Lasso with Non-Convex Sparse Regularizers (02/16/2019): Leveraging on the convexity of the Lasso problem, screening rules help ...
- Low Complexity Regularization of Linear Inverse Problems (07/07/2014): Inverse problems and regularization theory is a central theme in contemp...
- Understanding Notions of Stationarity in Non-Smooth Optimization (06/26/2020): Many contemporary applications in signal processing and machine learning...
