Iterative Smoothing Proximal Gradient for Regression with Structured Sparsity

by   Fouad Hadj-Selem, et al.

In the context of high-dimensional predictive models, we consider the problem of optimizing the sum of a smooth convex loss, a non-smooth convex penalty whose proximal operator is known, and a non-smooth convex structured penalty such as total variation or overlapping group lasso. We propose to smooth the structured penalty, since this yields a generic framework in which a large range of non-smooth convex structured penalties can be minimized without computing their proximal operators, which are either unknown or expensive to compute. The resulting problem can be minimized with an accelerated proximal gradient method while retaining the benefit of the (non-smoothed) sparsity-inducing penalty. We propose an expression of the duality gap to control the convergence of the global non-smooth problem; this expression applies to a large range of structured penalties. However, smoothing methods have limitations that the proposed solver aims to overcome. We therefore propose a continuation algorithm, called CONESTA, that dynamically generates a decreasing sequence of smoothing parameters in order to maintain the optimal convergence speed towards any globally desired precision. At each continuation step, the aforementioned duality gap provides the current error and hence the next, smaller prescribed precision. Given this precision, we propose an expression for the optimal smoothing parameter that minimizes the number of iterations required to reach it. We demonstrate that CONESTA achieves an improved convergence rate compared with classical (without continuation) proximal gradient smoothing. Moreover, experiments conducted on both simulated and high-dimensional neuroimaging (MRI) data show that CONESTA significantly outperforms the excessive gap method, ADMM, classical proximal gradient smoothing and inexact FISTA in terms of convergence speed and/or precision of the solution.
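The abstract's core idea can be illustrated with a minimal sketch: Nesterov smoothing of a total-variation penalty combined with FISTA, where the l1 penalty is handled exactly through its proximal operator (soft-thresholding) and the smoothing parameter mu is decreased over continuation steps. This is an assumed simplification, not the paper's implementation: in particular, the mu-halving schedule below stands in for CONESTA's duality-gap-driven choice of precisions and smoothing parameters, and all function names and default values are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def smoothed_pg_continuation(X, y, lam_l1, lam_tv, mu0=1.0,
                             n_outer=8, n_inner=200):
    """Sketch of smoothed proximal gradient with continuation on mu.

    Minimizes 0.5*||X b - y||^2 + lam_l1*||b||_1 + lam_tv*TV(b),
    where TV(b) = sum_i |b[i+1] - b[i]| is smoothed via Nesterov's
    technique. NOTE: the fixed mu-halving schedule is an assumption;
    CONESTA instead derives each mu from the current duality gap.
    """
    n, p = X.shape
    # First-difference operator A so that TV(b) = ||A b||_1
    A = np.diff(np.eye(p), axis=0)
    lip_X = np.linalg.norm(X, 2) ** 2   # Lipschitz const. of the loss grad
    lip_A = np.linalg.norm(A, 2) ** 2
    b = np.zeros(p)
    mu = mu0
    for _ in range(n_outer):
        # FISTA on the mu-smoothed problem; l1 term kept in the prox step
        L = lip_X + lam_tv * lip_A / mu
        z, t = b.copy(), 1.0
        for _ in range(n_inner):
            # Gradient of the smoothed TV term is A^T alpha*, where
            # alpha* projects A z / mu onto the l-infinity unit ball
            alpha = np.clip(A @ z / mu, -1.0, 1.0)
            grad = X.T @ (X @ z - y) + lam_tv * (A.T @ alpha)
            b_new = soft_threshold(z - grad / L, lam_l1 / L)
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            z = b_new + ((t - 1.0) / t_new) * (b_new - b)
            b, t = b_new, t_new
        mu /= 2.0  # shrink the smoothing parameter (simplified schedule)
    return b
```

As a usage sketch, recovering a piecewise-constant signal from noisy linear measurements with `smoothed_pg_continuation(X, y, lam_l1=0.1, lam_tv=0.5)` produces an estimate that is both sparse and flat within blocks, which is the regime the structured penalty targets.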

