Smoothing Proximal Gradient Method for General Structured Sparse Learning

02/14/2012
by Xi Chen, et al.

We study the problem of learning high-dimensional regression models regularized by a structured-sparsity-inducing penalty that encodes prior structural information on either the input or the output side. We consider two widely adopted types of such penalties as motivating examples: 1) the overlapping group lasso penalty, based on the l1/l2 mixed norm, and 2) the graph-guided fusion penalty. For both types of penalties, their non-separability has made developing an efficient optimization method a challenging problem. In this paper, we propose a general optimization approach, called the smoothing proximal gradient method, which can solve structured sparse regression problems with a smooth convex loss and a wide spectrum of structured-sparsity-inducing penalties. Our approach is based on a general smoothing technique of Nesterov. It achieves a convergence rate faster than that of the standard first-order method, the subgradient method, and is much more scalable than the most widely used interior-point method. Numerical results are reported to demonstrate the efficiency and scalability of the proposed method.
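As a rough illustration of the idea summarized in the abstract, the sketch below applies Nesterov's smoothing to an overlapping group lasso penalty and runs an accelerated gradient scheme on the smoothed objective. This is a minimal NumPy sketch under stated assumptions, not the paper's implementation: all function names, the smoothing parameter, and the conservative Lipschitz bound are illustrative choices.

```python
import numpy as np

def spg_overlapping_group_lasso(X, y, groups, lam=1.0, mu=1e-2, n_iter=500):
    """Minimize 0.5*||y - X b||^2 + lam * sum_g ||b[g]||_2 with possibly
    overlapping groups, by Nesterov-smoothing the penalty and running an
    accelerated (FISTA-style) gradient scheme on the smoothed objective.
    Illustrative sketch only; constants are assumptions, not from the paper."""
    n, p = X.shape
    b = np.zeros(p)
    # Overlap count per coordinate, used in a conservative Lipschitz bound
    # for the gradient of the smoothed objective.
    overlap = np.zeros(p)
    for g in groups:
        overlap[g] += 1
    L = np.linalg.norm(X, 2) ** 2 + lam * overlap.max() / mu
    w, t = b.copy(), 1.0  # auxiliary point and momentum for acceleration
    for _ in range(n_iter):
        # Gradient of the smoothed penalty: for each group, project
        # w[g]/mu onto the unit l2 ball (the maximizer of the smoothed dual).
        grad_pen = np.zeros(p)
        for g in groups:
            a = w[g] / mu
            nrm = np.linalg.norm(a)
            if nrm > 1.0:
                a /= nrm
            grad_pen[g] += lam * a
        grad = X.T @ (X @ w - y) + grad_pen
        b_new = w - grad / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        w = b_new + ((t - 1.0) / t_new) * (b_new - b)
        b, t = b_new, t_new
    return b

def group_lasso_objective(X, y, b, groups, lam):
    """Original (non-smoothed) overlapping group lasso objective."""
    return 0.5 * np.sum((y - X @ b) ** 2) + lam * sum(
        np.linalg.norm(b[g]) for g in groups)
```

Because the smoothed surrogate differs from the true penalty by at most lam*mu/2 per group, a small mu trades a tighter approximation against a larger Lipschitz constant (and hence a smaller step size), which is the central tension the paper's convergence analysis addresses.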


Related research

- 05/31/2016, Iterative Smoothing Proximal Gradient for Regression with Structured Sparsity: In the context of high-dimensional predictive models, we consider the prob...
- 09/08/2009, Tree-guided group lasso for multi-response regression with structured sparsity, with an application to eQTL mapping: We consider the problem of estimating a sparse multi-response regression...
- 05/04/2011, Structured Sparsity via Alternating Direction Methods: We consider a class of sparse learning problems in high dimensional feat...
- 09/03/2012, Proximal methods for the latent group lasso penalty: We consider a regularized least squares problem, with regularization by ...
- 09/08/2015, A Scalable and Extensible Framework for Superposition-Structured Models: In many learning tasks, structural models usually lead to better interpr...
- 02/07/2021, Structured Sparsity Inducing Adaptive Optimizers for Deep Learning: The parameters of a neural network are naturally organized in groups, so...
- 08/15/2012, Efficient Algorithm for Extremely Large Multi-task Regression with Massive Structured Sparsity: We develop a highly scalable optimization method called "hierarchical gr...
