Efficient Designs of SLOPE Penalty Sequences in Finite Dimension

02/14/2021
by Yiliang Zhang, et al.

In linear regression, SLOPE is a relatively new convex optimization method that generalizes the Lasso via the sorted L1 penalty: larger fitted coefficients are penalized more heavily. This magnitude-dependent regularization requires a penalty sequence λ as input, rather than the scalar penalty of the Lasso, which makes designing the penalty computationally expensive. In this paper, we propose two efficient algorithms for designing the possibly high-dimensional SLOPE penalty so as to minimize the mean squared error. For Gaussian data matrices, we propose a first-order Projected Gradient Descent (PGD) under the Approximate Message Passing (AMP) regime. For general data matrices, we present a zeroth-order Coordinate Descent (CD) that designs a sub-class of SLOPE penalties, referred to as k-level SLOPE. Our CD allows a useful trade-off between accuracy and computation speed. We demonstrate the performance of SLOPE with our designs through extensive experiments on synthetic data and real-world datasets.
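To make the abstract's two key objects concrete, the sketch below shows the sorted L1 penalty that defines SLOPE and the k-level restriction of the penalty sequence. This is a minimal illustration under our own assumptions, not the authors' implementation; the names slope_penalty and k_level_lambda and the example values are hypothetical.

    # Minimal sketch (not the paper's code) of the sorted-L1 penalty:
    # given a non-increasing sequence lambda_1 >= ... >= lambda_p >= 0,
    # the penalty pairs the largest |beta| with the largest lambda.
    import numpy as np

    def slope_penalty(beta, lam):
        """Sorted-L1 penalty: sum_i lam_i * |beta|_(i),
        with |beta|_(1) >= ... >= |beta|_(p)."""
        lam = np.sort(lam)[::-1]                  # enforce non-increasing lambda
        abs_sorted = np.sort(np.abs(beta))[::-1]  # magnitudes, largest first
        return float(lam @ abs_sorted)

    def k_level_lambda(p, levels, sizes):
        """Build a k-level SLOPE sequence: only k distinct values,
        repeated sizes[j] times each (sizes must sum to p).
        k = 1 recovers the Lasso's scalar penalty."""
        assert sum(sizes) == p
        assert sorted(levels, reverse=True) == list(levels)
        return np.repeat(levels, sizes)

    # Example: a 2-level SLOPE penalty on p = 5 coefficients.
    beta = np.array([0.5, -2.0, 0.0, 1.0, -0.3])
    lam = k_level_lambda(5, levels=[1.0, 0.2], sizes=[2, 3])
    print(slope_penalty(beta, lam))  # 1.0*(2.0 + 1.0) + 0.2*(0.5 + 0.3 + 0.0) ≈ 3.16

Restricting λ to k distinct levels shrinks the design space from p free parameters to roughly 2k, which is what makes the zeroth-order CD search over penalty sequences tractable for general data matrices.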


Related research

02/11/2020
Asymptotic errors for convex penalized linear regression beyond Gaussian matrices
We consider the problem of learning a coefficient vector x_0 in R^N from...

07/02/2021
Asymptotic Statistical Analysis of Sparse Group LASSO via Approximate Message Passing Algorithm
Sparse Group LASSO (SGL) is a regularized model for high-dimensional lin...

07/17/2019
Algorithmic Analysis and Statistical Estimation of SLOPE via Approximate Message Passing
SLOPE is a relatively new convex optimization procedure for high-dimensi...

03/27/2019
Asymptotics and Optimal Designs of SLOPE for Sparse Linear Regression
In sparse linear regression, the SLOPE estimator generalizes LASSO by as...

10/02/2017
Lasso Regularization Paths for NARMAX Models via Coordinate Descent
We propose a new algorithm for estimating NARMAX models with L1 regulari...

05/27/2021
Characterizing the SLOPE Trade-off: A Variational Perspective and the Donoho-Tanner Limit
Sorted l1 regularization has been incorporated into many methods for sol...

04/21/2020
Normalizing Flow Regression
In this letter we propose a convex approach to learning expressive scala...
