Nonconvex Sparse Learning via Stochastic Optimization with Progressive Variance Reduction

05/09/2016
by Xingguo Li et al.

We propose a stochastic variance reduced optimization algorithm for solving sparse learning problems with cardinality constraints. Sufficient conditions are provided under which the proposed algorithm enjoys strong linear convergence guarantees and optimal estimation accuracy in high dimensions. We further extend the proposed algorithm to an asynchronous parallel variant that achieves near-linear speedup. Numerical experiments demonstrate the efficiency of our algorithm in terms of both parameter estimation and computational performance.
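The algorithm itself is not spelled out on this page, but a common way to combine variance-reduced stochastic gradients with a cardinality constraint is to take SVRG-style updates followed by hard thresholding: each epoch recomputes one full gradient at a snapshot point, so the stochastic gradients have progressively smaller variance as the iterates converge. Below is a minimal sketch of that idea for sparse least squares; the problem instance, step size, epoch length, and all function and variable names are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch of stochastic variance reduced gradient descent with hard thresholding
# for the cardinality-constrained least-squares problem
#     min_theta (1/2n) ||y - X theta||^2   s.t.  ||theta||_0 <= s
# All hyperparameters below are illustrative choices.
import numpy as np

def hard_threshold(theta, s):
    """Keep the s largest-magnitude coordinates, zero out the rest."""
    out = np.zeros_like(theta)
    idx = np.argsort(np.abs(theta))[-s:]
    out[idx] = theta[idx]
    return out

def svrg_ht(X, y, s, eta=0.05, n_epochs=20, inner_steps=None, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    inner_steps = inner_steps or n
    theta = np.zeros(d)
    for _ in range(n_epochs):
        # Full gradient at the snapshot point, recomputed once per epoch.
        theta_ref = theta.copy()
        full_grad = X.T @ (X @ theta_ref - y) / n
        for _ in range(inner_steps):
            i = rng.integers(n)
            xi, yi = X[i], y[i]
            # Variance-reduced stochastic gradient:
            # g_i(theta) - g_i(theta_ref) + full_grad
            g = xi * (xi @ theta - yi) - xi * (xi @ theta_ref - yi) + full_grad
            # Gradient step, then project onto the sparsity constraint.
            theta = hard_threshold(theta - eta * g, s)
    return theta

# Toy usage: recover a 10-sparse vector from noisy linear measurements.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d, s = 500, 200, 10
    theta_true = np.zeros(d)
    theta_true[rng.choice(d, s, replace=False)] = rng.normal(size=s)
    X = rng.normal(size=(n, d))
    y = X @ theta_true + 0.01 * rng.normal(size=n)
    theta_hat = svrg_ht(X, y, s)
    print("estimation error:", np.linalg.norm(theta_hat - theta_true))
```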


Related research

05/25/2016 - A First Order Free Lunch for SQRT-Lasso
Many statistical machine learning techniques sacrifice convenient comput...

06/19/2017 - On Quadratic Convergence of DC Proximal Newton Algorithm for Nonconvex Sparse Learning in High Dimensions
We propose a DC proximal Newton algorithm for solving nonconvex regulari...

12/23/2014 - Pathwise Coordinate Optimization for Sparse Learning: Algorithm and Theory
The pathwise coordinate optimization is one of the most important comput...

05/31/2020 - Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
Stochastic gradient methods (SGMs) have been extensively used for solvin...

10/11/2022 - Divergence Results and Convergence of a Variance Reduced Version of ADAM
Stochastic optimization algorithms using exponential moving averages of ...

01/30/2023 - Distributed Stochastic Optimization under a General Variance Condition
Distributed stochastic optimization has drawn great attention recently d...

06/11/2020 - Sparse recovery by reduced variance stochastic approximation
In this paper, we discuss application of iterative Stochastic Optimizati...
