
Linear convergence of SDCA in statistical estimation

by Chao Qu et al.

In this paper, we consider stochastic dual coordinate ascent (SDCA) without the strong convexity assumption, or even the convexity assumption. We show that SDCA converges linearly under a mild condition termed restricted strong convexity. This covers a wide array of popular statistical models, including Lasso, group Lasso, logistic regression with ℓ_1 regularization, corrected Lasso, and linear regression with the SCAD regularizer. This significantly improves previous convergence results on SDCA for problems that are not strongly convex. As a by-product, we derive a dual-free form of SDCA that can handle general regularization terms, which is of interest in its own right.
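To make the dual-free idea concrete: dual-free SDCA maintains one pseudo-dual vector per sample and keeps the primal iterate equal to the (scaled) average of those vectors, updating a single randomly chosen sample per step. Below is a minimal sketch for the classical ℓ_2-regularized least-squares case only, not the general regularizers treated in the paper; the function name, step size, and update form α_i ← α_i − η λ n (∇f_i(w) + α_i) follow the standard dual-free SDCA template and are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def dual_free_sdca(X, y, lam=0.1, eta=0.01, epochs=50, seed=0):
    """Dual-free SDCA sketch for ridge-regularized least squares:
        min_w (1/n) sum_i 0.5 * (x_i^T w - y_i)^2 + (lam/2) ||w||^2.

    Maintains the invariant w = (1/(lam*n)) * sum_i alpha_i.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros((n, d))           # one pseudo-dual vector per sample
    w = alpha.sum(axis=0) / (lam * n)  # primal iterate kept in sync with alpha
    for _ in range(epochs * n):
        i = rng.integers(n)
        grad_i = (X[i] @ w - y[i]) * X[i]         # gradient of f_i at w
        delta = -eta * lam * n * (grad_i + alpha[i])
        alpha[i] += delta
        w += delta / (lam * n)         # restore w = (1/(lam*n)) sum_j alpha_j
    return w
```

Because no conjugate function is ever evaluated, the same update template extends to losses and regularizers without a tractable dual, which is the point of the paper's dual-free derivation.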




SAGA and Restricted Strong Convexity

SAGA is a fast incremental gradient method on the finite sum problem and...

Linear Convergence of the Randomized Feasible Descent Method Under the Weak Strong Convexity Assumption

In this paper we generalize the framework of the feasible descent method...

Linear Convergence of SVRG in Statistical Estimation

SVRG and its variants are among the state-of-the-art optimization algorithms...

Stochastic Primal-Dual Proximal ExtraGradient Descent for Compositely Regularized Optimization

We consider a wide range of regularized stochastic minimization problems...

Strongly Convex Divergences

We consider a sub-class of the f-divergences satisfying a stronger conve...

On Sparsity Inducing Regularization Methods for Machine Learning

During the past years there has been an explosion of interest in learnin...

Dual optimization for convex constrained objectives without the gradient-Lipschitz assumption

The minimization of convex objectives coming from linear supervised lear...