The non-overlapping statistical approximation to overlapping group lasso

11/16/2022
by Mingyu Qi, et al.

Group lasso is a commonly used regularization method in statistical learning in which parameters are eliminated from the model according to predefined groups. However, when the groups overlap, optimizing the group lasso penalized objective can be time-consuming on large-scale problems because of the non-separability induced by the overlapping groups. This bottleneck has seriously limited the application of overlapping group lasso regularization in many modern problems, such as gene pathway selection and graphical model estimation. In this paper, we propose a separable penalty as an approximation of the overlapping group lasso penalty. Thanks to the separability, the computation of regularization based on our penalty is substantially faster than that of the overlapping group lasso, especially for large-scale and high-dimensional problems. We show that the penalty is the tightest separable relaxation of the overlapping group lasso norm within the family of ℓ_{q_1}/ℓ_{q_2} norms. Moreover, we show that the estimator based on the proposed separable penalty is statistically equivalent to the one based on the overlapping group lasso penalty with respect to their error bounds and the rate-optimal performance under the squared loss. We demonstrate the faster computational time and statistical equivalence of our method compared with the overlapping group lasso in simulation examples and a classification problem of cancer tumors based on gene expression and multiple gene pathways.
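To make the non-separability concrete: the overlapping group lasso penalty is a sum of Euclidean norms over groups of coefficients, and when a coordinate belongs to several groups its proximal update couples those groups together. The sketch below computes that penalty, together with a purely illustrative per-coordinate (hence separable) surrogate. The surrogate here is a hypothetical construction for illustration only, not the penalty proposed in the paper; the group index sets, weights, and function names are all assumptions.

```python
import numpy as np

def overlapping_group_lasso_penalty(beta, groups, weights=None):
    """Sum of (weighted) Euclidean norms over possibly overlapping groups.

    groups: list of integer index arrays; a coordinate appearing in more
    than one group is exactly what makes the proximal step non-separable.
    """
    if weights is None:
        weights = [np.sqrt(len(g)) for g in groups]  # common default weighting
    return sum(w * np.linalg.norm(beta[g]) for w, g in zip(weights, groups))

def separable_surrogate(beta, groups, weights=None):
    """Hypothetical separable surrogate (NOT the paper's penalty).

    Aggregates each group's weight into a per-coordinate weight, yielding a
    weighted l1-type penalty whose proximal operator factorizes coordinatewise.
    """
    if weights is None:
        weights = [np.sqrt(len(g)) for g in groups]
    c = np.zeros_like(beta)
    for w, g in zip(weights, groups):
        c[g] += (w / np.sqrt(len(g))) ** 2  # spread group weight over members
    return np.sum(np.sqrt(c) * np.abs(beta))

# Coordinate 1 is shared by both groups, so the two norm terms are coupled.
beta = np.array([3.0, 4.0, 0.0])
groups = [np.array([0, 1]), np.array([1, 2])]
print(overlapping_group_lasso_penalty(beta, groups, weights=[1.0, 1.0]))  # 9.0
```

Because the surrogate depends on `beta` only through coordinatewise terms, its proximal operator reduces to independent soft-thresholding steps, which is the computational advantage the abstract attributes to separability.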

Related research

- Group Lasso with Overlaps: the Latent Group Lasso approach (10/03/2011). We study a norm for structured sparsity which leads to sparse linear pre...
- Structured Sparsity and Generalization (08/17/2011). We present a data dependent generalization bound for a large class of re...
- Proximal methods for the latent group lasso penalty (09/03/2012). We consider a regularized least squares problem, with regularization by ...
- Computational Sufficiency, Reflection Groups, and Generalized Lasso Penalties (09/08/2018). We study estimators with generalized lasso penalties within the computat...
- Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis (11/20/2013). Multitask learning can be effective when features useful in one task are...
- Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint (01/16/2013). Deep Belief Networks (DBN) have been successfully applied on popular mac...
- Group-sparse SVD Models and Their Applications in Biological Data (07/28/2018). Sparse Singular Value Decomposition (SVD) models have been proposed for ...
