Coordinate Descent for SLOPE

10/26/2022
by Johan Larsson, et al.

The lasso is the most famous sparse regression and feature selection method. One reason for its popularity is the speed at which the underlying optimization problem can be solved. Sorted L-One Penalized Estimation (SLOPE) is a generalization of the lasso with appealing statistical properties. In spite of this, the method has not yet seen widespread use, in large part because current software packages that fit SLOPE rely on algorithms that perform poorly in high dimensions. To tackle this issue, we propose a new, fast algorithm for the SLOPE optimization problem that combines proximal gradient descent and proximal coordinate descent steps. We provide new results on the directional derivative of the SLOPE penalty and its related SLOPE thresholding operator, and establish convergence guarantees for the proposed solver. In extensive benchmarks on simulated and real data, we show that our method outperforms a long list of competing algorithms.
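For readers unfamiliar with the problem, SLOPE replaces the lasso's single penalty level with a nonincreasing sequence of weights lambda_1 >= ... >= lambda_p >= 0 applied to the sorted magnitudes of the coefficients, minimizing 0.5 * ||y - X beta||^2 + sum_j lambda_j * |beta|_(j). The computational primitive shared by proximal gradient and proximal coordinate descent methods is the proximal operator of this sorted-L1 penalty. Below is a minimal NumPy sketch of that operator, based on the pool-adjacent-violators (PAVA) construction of Bogdan et al. (2015); the function name prox_slope and the stack-based details are illustrative, not the implementation used in the paper.

```python
import numpy as np

def prox_slope(v, lam):
    """Prox of the sorted-L1 (SLOPE) penalty:
    argmin_b 0.5*||b - v||^2 + sum_j lam[j] * |b|_(j),
    with lam sorted in nonincreasing order. Sketch following the
    PAVA-based construction of Bogdan et al. (2015)."""
    order = np.argsort(np.abs(v))[::-1]   # indices sorting |v| decreasingly
    z = np.abs(v)[order] - lam            # shifted, sorted magnitudes

    # Pool adjacent violators: project z onto the nonincreasing cone
    # by merging adjacent blocks whose averages violate monotonicity.
    sums, lens, avgs = [], [], []
    for zi in z:
        sums.append(zi); lens.append(1); avgs.append(zi)
        while len(avgs) > 1 and avgs[-1] > avgs[-2]:
            s = sums.pop() + sums.pop()
            n = lens.pop() + lens.pop()
            avgs.pop(); avgs.pop()
            sums.append(s); lens.append(n); avgs.append(s / n)

    # Expand the blocks, clip at zero, undo the sort, restore signs.
    w = np.concatenate([np.full(n, max(a, 0.0)) for n, a in zip(lens, avgs)])
    out = np.zeros_like(v, dtype=float)
    out[order] = w
    return np.sign(v) * out

# Example (hypothetical data):
v = np.array([3.0, -1.2, 0.5, 2.4])
lam = np.array([2.0, 1.5, 1.0, 0.5])   # nonincreasing weights
print(prox_slope(v, lam))              # [ 1.  -0.2  0.   0.9]
```

A solver built on this primitive alternates prox evaluations with full-gradient or coordinate-wise updates; the paper's contribution is a hybrid scheme that interleaves the two kinds of steps.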


