Efficient First Order Methods for Linear Composite Regularizers

04/07/2011
by Andreas Argyriou et al.

A wide class of regularization problems in machine learning and statistics employs a regularization term obtained by composing a simple convex function ω with a linear transformation. This setting includes Group Lasso methods, the Fused Lasso and other total variation methods, multi-task learning methods, and many more. In this paper, we present a general approach for computing the proximity operator of this class of regularizers, under the assumption that the proximity operator of the function ω is known in advance. Our approach builds on a recent line of research on optimal first order optimization methods and uses fixed point iterations for numerically computing the proximity operator. It is more general than current approaches and, as we show with numerical simulations, computationally more efficient than available first order methods that do not achieve the optimal rate. In particular, our method outperforms state-of-the-art O(1/T) methods for the overlapping Group Lasso and matches optimal O(1/T^2) methods for the Fused Lasso and tree-structured Group Lasso.
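
To make the idea concrete, below is a minimal Python sketch of one standard way to realize this computation: a fixed point (dual proximal gradient) iteration that approximates the proximity operator of x -> ω(Bx) using only the proximity operator of ω, obtained through Moreau's identity. This illustrates the general technique rather than the paper's exact algorithm; the function names, step-size choice, and iteration count are assumptions made for the example.

import numpy as np

def prox_linear_composite(v, B, prox_omega, sigma=None, n_iter=200):
    """Approximate the proximity operator of h(x) = omega(B x) at v.

    Runs a proximal gradient (fixed point) iteration on the dual problem
        min_u  0.5 * ||v - B^T u||^2 + omega^*(u),
    and returns x = v - B^T u.  The prox of the conjugate omega^* is
    recovered from the prox of omega via Moreau's identity:
        prox_{sigma * omega^*}(w) = w - sigma * prox_{omega/sigma}(w / sigma).
    `prox_omega(z, t)` must return argmin_x 0.5*||x - z||^2 + t*omega(x).
    """
    if sigma is None:
        # A step size of 1 / ||B||_2^2 guarantees convergence of the
        # dual iteration (||B||_2 is the spectral norm).
        sigma = 1.0 / np.linalg.norm(B, 2) ** 2
    u = np.zeros(B.shape[0])
    for _ in range(n_iter):
        # Gradient step on the smooth dual term ...
        w = u + sigma * B.dot(v - B.T.dot(u))
        # ... followed by the prox of sigma * omega^* (Moreau identity).
        u = w - sigma * prox_omega(w / sigma, 1.0 / sigma)
    return v - B.T.dot(u)

As a usage example, taking ω = lam * ||.||_1 and B the first order difference matrix (both choices are illustrative, not prescribed by the paper) yields the Fused Lasso / total variation prox:

# (B x)_i = x_{i+1} - x_i, so omega(B x) penalizes jumps in x.
lam = 0.5
n = 10
B = np.diff(np.eye(n), axis=0)  # (n-1) x n difference matrix
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
v = np.random.randn(n)
x = prox_linear_composite(v, B, lambda z, t: soft(z, t * lam))

A routine of this kind can then serve as the prox step inside an accelerated first order scheme such as FISTA, which is how the composite structure is handled without requiring a closed-form prox for ω composed with B.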
