
Efficient Designs of SLOPE Penalty Sequences in Finite Dimension

by   Yiliang Zhang, et al.

In linear regression, SLOPE is a relatively new convex optimization procedure that generalizes the Lasso via the sorted L1 penalty: larger fitted coefficients are penalized more heavily. This magnitude-dependent regularization requires a penalty sequence λ as input, rather than the scalar penalty used in the Lasso, which makes designing the penalty computationally expensive. In this paper, we propose two efficient algorithms to design the possibly high-dimensional SLOPE penalty so as to minimize the mean squared error. For Gaussian data matrices, we propose a first-order Projected Gradient Descent (PGD) under the Approximate Message Passing regime. For general data matrices, we present a zeroth-order Coordinate Descent (CD) to design a sub-class of SLOPE, referred to as the k-level SLOPE. Our CD allows a useful trade-off between accuracy and computation speed. We demonstrate the performance of SLOPE with our designs via extensive experiments on synthetic data and real-world datasets.
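To make the sorted L1 penalty concrete, here is a minimal NumPy sketch (function names are illustrative, not from the paper): the penalty pairs the i-th largest coefficient magnitude with the i-th largest entry of a nonincreasing sequence λ, so larger coefficients meet larger penalties. A k-level sequence, as in the paper's k-level SLOPE, is simply a λ with only k distinct values.

```python
import numpy as np

def sorted_l1_penalty(beta, lam):
    """Sorted-L1 (SLOPE) penalty: sum_i lam_i * |beta|_(i),
    where |beta|_(1) >= |beta|_(2) >= ... are the magnitudes of beta
    sorted in nonincreasing order, and lam is nonincreasing."""
    lam = np.asarray(lam, dtype=float)
    if np.any(np.diff(lam) > 0):
        raise ValueError("lambda sequence must be nonincreasing")
    abs_sorted = np.sort(np.abs(np.asarray(beta, dtype=float)))[::-1]
    return float(abs_sorted @ lam)

def k_level_lambda(levels, counts):
    """Build a k-level penalty sequence: each value levels[j]
    (given in nonincreasing order) is repeated counts[j] times."""
    return np.repeat(np.asarray(levels, dtype=float), counts)
```

For example, with beta = [3, -1, 2] and λ = [0.3, 0.2, 0.1], the sorted magnitudes are [3, 2, 1] and the penalty is 0.9 + 0.4 + 0.1 = 1.4; a constant λ recovers the ordinary Lasso penalty λ‖β‖₁.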




Asymptotic errors for convex penalized linear regression beyond Gaussian matrices

We consider the problem of learning a coefficient vector x_0 in R^N from...

Asymptotic Statistical Analysis of Sparse Group LASSO via Approximate Message Passing Algorithm

Sparse Group LASSO (SGL) is a regularized model for high-dimensional lin...

Algorithmic Analysis and Statistical Estimation of SLOPE via Approximate Message Passing

SLOPE is a relatively new convex optimization procedure for high-dimensi...

Lasso Regularization Paths for NARMAX Models via Coordinate Descent

We propose a new algorithm for estimating NARMAX models with L1 regulari...

Asymptotics and Optimal Designs of SLOPE for Sparse Linear Regression

In sparse linear regression, the SLOPE estimator generalizes LASSO by as...

Characterizing the SLOPE Trade-off: A Variational Perspective and the Donoho-Tanner Limit

Sorted l1 regularization has been incorporated into many methods for sol...

Second order Poincaré inequalities and de-biasing arbitrary convex regularizers when p/n → γ

A new Central Limit Theorem (CLT) is developed for random variables of t...