Linear convergence of SDCA in statistical estimation

01/26/2017
by Chao Qu, et al.

In this paper, we consider stochastic dual coordinate ascent (SDCA) without the strong convexity assumption, or even the convexity assumption. We show that SDCA converges linearly under a mild condition termed restricted strong convexity. This covers a wide array of popular statistical models, including Lasso, group Lasso, logistic regression with ℓ_1 regularization, corrected Lasso, and linear regression with the SCAD regularizer. This significantly improves previous convergence results on SDCA for problems that are not strongly convex. As a by-product, we derive a dual-free form of SDCA that can handle general regularization terms, which is of interest in its own right.
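To make the dual-free idea concrete, below is a minimal sketch of the basic dual-free SDCA update in the spirit of Shalev-Shwartz's "SDCA without Duality" (2016), on which this line of work builds, applied to the ℓ_2-regularized logistic-regression objective min_w (1/n) Σ_i φ_i(w) + (λ/2)‖w‖². The function name and hyperparameter values are illustrative assumptions, and the paper's actual variant for general regularizers (e.g. ℓ_1) would replace the plain gradient step on w with a proximal step, which is not shown here.

```python
import numpy as np

def dual_free_sdca(X, y, lam=0.1, eta=0.05, epochs=30, seed=0):
    """Sketch of dual-free SDCA for min_w (1/n) sum_i phi_i(w) + (lam/2)||w||^2,
    with phi_i(w) = log(1 + exp(-y_i x_i^T w)) the logistic loss.

    Maintains one pseudo-dual vector alpha_i per example and the invariant
    w = (1/(lam*n)) * sum_i alpha_i; at the optimum alpha_i = -grad phi_i(w*).
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros((n, d))            # pseudo-dual variables, one per example
    w = alpha.sum(axis=0) / (lam * n)   # primal iterate implied by the invariant
    for _ in range(epochs * n):
        i = rng.integers(n)
        # gradient of the logistic loss of example i at the current w
        margin = y[i] * X[i].dot(w)
        grad = -y[i] * X[i] / (1.0 + np.exp(margin))
        v = grad + alpha[i]             # residual; vanishes at the optimum
        alpha[i] -= eta * lam * n * v
        w -= eta * v                    # preserves w = (1/(lam*n)) * sum_i alpha_i
    return w

# Illustrative usage on synthetic separable data (assumed setup, not from the paper):
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.0, 0.0, 3.0]))
w_hat = dual_free_sdca(X, y)
```

Note that the update only ever queries gradients of the individual losses, never a dual objective; this is what lets the analysis extend beyond the conjugate-friendly, strongly convex setting.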


Related research

02/19/2017 · SAGA and Restricted Strong Convexity
SAGA is a fast incremental gradient method on the finite sum problem and...

06/08/2015 · Linear Convergence of the Randomized Feasible Descent Method Under the Weak Strong Convexity Assumption
In this paper we generalize the framework of the feasible descent method...

11/07/2016 · Linear Convergence of SVRG in Statistical Estimation
SVRG and its variants are among the state of art optimization algorithms...

08/20/2017 · Stochastic Primal-Dual Proximal ExtraGradient Descent for Compositely Regularized Optimization
We consider a wide range of regularized stochastic minimization problems...

09/22/2020 · Strongly Convex Divergences
We consider a sub-class of the f-divergences satisfying a stronger conve...

03/25/2013 · On Sparsity Inducing Regularization Methods for Machine Learning
During the past years there has been an explosion of interest in learnin...

07/10/2018 · Dual optimization for convex constrained objectives without the gradient-Lipschitz assumption
The minimization of convex objectives coming from linear supervised lear...
