Stochastic Dual Coordinate Ascent with Adaptive Probabilities

02/27/2015
by Dominik Csiba, et al.

This paper introduces AdaSDCA: an adaptive variant of stochastic dual coordinate ascent (SDCA) for solving regularized empirical risk minimization problems. Our modification allows the method to adaptively change the probability distribution over the dual variables throughout the iterative process. AdaSDCA achieves a provably better complexity bound than SDCA with the best fixed probability distribution, known as importance sampling. However, the method is mainly of theoretical interest, as it is expensive to implement. We also propose AdaSDCA+: a practical variant which, in our experiments, outperforms existing non-adaptive methods.
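To make the adaptive-sampling idea concrete, below is a minimal sketch for ridge regression (squared loss with an L2 regularizer). It maintains the usual SDCA primal-dual pair, recomputes the dual residues kappa_i = |alpha_i + phi_i'(x_i^T w)| at every iteration, and samples coordinate i with probability proportional to kappa_i * sqrt(v_i + n*lambda*gamma); recomputing all n residues per step is exactly the cost that makes the theoretical variant expensive. The function name, the closed-form update for the squared loss, and the synthetic data are illustrative assumptions, not code from the paper.

import numpy as np

def ada_sdca(X, y, lam, n_iters=1000, seed=0):
    # Sketch of adaptive-probability SDCA for ridge regression:
    # min_w (1/n) sum_i 0.5*(x_i^T w - y_i)^2 + (lam/2)*||w||^2
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)              # dual variables, one per example
    w = X.T @ alpha / (lam * n)      # invariant: w = X^T alpha / (lam * n)
    v = np.einsum('ij,ij->i', X, X)  # v_i = ||x_i||^2
    gamma = 1.0                      # squared loss: phi_i* is 1-strongly convex
    for _ in range(n_iters):
        # Dual residues; for squared loss phi_i'(a) = a - y_i.
        kappa = np.abs(alpha + X @ w - y)
        scores = kappa * np.sqrt(v + n * lam * gamma)
        total = scores.sum()
        if total == 0:
            break                    # all residues zero: dual optimum reached
        i = rng.choice(n, p=scores / total)
        # Closed-form SDCA step for the squared loss on coordinate i.
        delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + v[i] / (lam * n))
        alpha[i] += delta
        w += delta * X[i] / (lam * n)
    return w

# Tiny usage example on synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.1 * rng.standard_normal(200)
w_hat = ada_sdca(X, y, lam=0.1, n_iters=2000)
print(np.linalg.norm(w_hat - w_true))

A practical variant in the spirit of AdaSDCA+ would avoid the full residue recomputation, e.g. by refreshing the sampling distribution only once per epoch; the sketch above deliberately shows the expensive theoretical version.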


