Characterizing the SLOPE Trade-off: A Variational Perspective and the Donoho-Tanner Limit

05/27/2021
by Zhiqi Bu et al.

Sorted l1 regularization has been incorporated into many methods for solving high-dimensional statistical estimation problems, including the SLOPE estimator in linear regression. In this paper, we study how this relatively new regularization technique improves variable selection by characterizing the optimal SLOPE trade-off between the false discovery proportion (FDP) and true positive proportion (TPP) or, equivalently, between measures of type I error and power. Assuming a regime of linear sparsity and working under Gaussian random designs, we obtain an upper bound on the optimal trade-off for SLOPE, showing its capability of breaking the Donoho-Tanner power limit. To put this limit into perspective, it is the highest possible power that the Lasso, perhaps the most popular l1-based method, can achieve even with arbitrarily strong effect sizes. Next, we derive a tight lower bound that delineates the fundamental limit of sorted l1 regularization in optimally trading off the FDP against the TPP. Finally, we show that on any problem instance, SLOPE with a certain regularization sequence outperforms the Lasso in the sense of having a smaller FDP, larger TPP, and smaller l2 estimation risk simultaneously. Our proofs are based on a novel technique that reduces a calculus-of-variations problem to a class of infinite-dimensional convex optimization problems, together with a very recent result from approximate message passing theory.
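To make the objects in the abstract concrete, here is a minimal NumPy sketch (not the authors' code): it computes the sorted-l1 prox with a standard stack-based pool-adjacent-violators pass, solves the SLOPE program min_b 0.5*||y - Xb||^2 + sum_i lam_i*|b|_(i) by proximal gradient descent, and measures the empirical TPP and FDP on a synthetic Gaussian design in the linear-sparsity regime. The function names (prox_sorted_l1, slope), the problem sizes, and the linearly decaying regularization sequence lam are illustrative assumptions, not choices from the paper.

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted-l1 penalty: argmin_b 0.5*||b - v||^2 + sum_i lam_i*|b|_(i),
    for a non-increasing, nonnegative weight vector lam. Uses a stack-based
    pool-adjacent-violators pass on |v| sorted in decreasing order."""
    sign = np.sign(v)
    order = np.argsort(-np.abs(v))          # indices sorting |v| in decreasing order
    z = np.abs(v)[order] - lam              # shift each sorted entry by its weight
    # Merge adjacent blocks until the block averages are non-increasing.
    blocks = []                             # each block: [start, end, total, average]
    for i, zi in enumerate(z):
        blocks.append([i, i, zi, zi])
        while len(blocks) > 1 and blocks[-2][3] <= blocks[-1][3]:
            hi = blocks.pop()
            lo = blocks.pop()
            total = lo[2] + hi[2]
            blocks.append([lo[0], hi[1], total, total / (hi[1] - lo[0] + 1)])
    x = np.zeros_like(z)
    for start, end, _, avg in blocks:
        x[start:end + 1] = max(avg, 0.0)    # clip at zero: produces exact sparsity
    out = np.zeros_like(x)
    out[order] = x                          # undo the sort
    return sign * out

def slope(X, y, lam, n_iter=1000):
    """SLOPE via proximal gradient descent with a fixed 1/L step size."""
    b = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # L = squared spectral norm of X
    for _ in range(n_iter):
        b = prox_sorted_l1(b - step * (X.T @ (X @ b - y)), step * lam)
    return b

# Illustrative experiment under a Gaussian random design with linear sparsity.
rng = np.random.default_rng(0)
n, p, k = 250, 500, 50                      # k/p = 0.1 nonzeros (linear sparsity)
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[:k] = 4.0                              # strong, homogeneous effect sizes
y = X @ beta + rng.standard_normal(n)
lam = np.linspace(2.0, 0.5, p)              # illustrative decreasing sequence, not tuned
b_hat = slope(X, y, lam)

selected, true_support = b_hat != 0, beta != 0
tpp = (selected & true_support).sum() / max(true_support.sum(), 1)
fdp = (selected & ~true_support).sum() / max(selected.sum(), 1)
print(f"TPP = {tpp:.3f}, FDP = {fdp:.3f}")
```

Sweeping the overall scale of lam in this sketch traces out an empirical TPP-FDP curve for a single problem instance; the paper's upper and lower bounds characterize the asymptotically optimal such curve over all regularization sequences.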


