Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization

09/10/2013
by Shai Shalev-Shwartz, et al.

We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.
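To make the setup concrete, the following is a minimal Python/NumPy sketch of a proximal SDCA-style inner solver for the squared loss with a combined L2 and L1 regularizer (a Lasso-type instance). The function name prox_sdca_squared_loss, the parameters lam and sigma, and the closed-form coordinate step are illustrative assumptions rather than the exact update options analyzed in the paper; the step used below is exact only for the pure L2 case and acts as a simple approximate ascent step once the L1 term is added.

```python
import numpy as np


def prox_sdca_squared_loss(X, y, lam=0.1, sigma=0.01, epochs=50, seed=0):
    """Minimize (1/n) * sum_i 0.5*(x_i.w - y_i)^2 + (lam/2)*||w||^2 + sigma*||w||_1.

    One dual variable alpha_i per example is updated at a time; the primal
    iterate is recovered from v = (1/(lam*n)) * X^T alpha by soft-thresholding,
    which is the proximal step that handles the L1 part of the regularizer.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)
    v = np.zeros(d)

    def primal_from_dual(v):
        # Soft-thresholding: argmax_w <v, w> - 0.5*||w||^2 - (sigma/lam)*||w||_1
        return np.sign(v) * np.maximum(np.abs(v) - sigma / lam, 0.0)

    sq_norms = np.einsum("ij,ij->i", X, X)  # ||x_i||^2 for every example
    for _ in range(epochs):
        for i in rng.permutation(n):
            w = primal_from_dual(v)
            # Closed-form coordinate step for the squared loss
            # (exact for sigma = 0, an approximate ascent step otherwise).
            residual = y[i] - X[i] @ w - alpha[i]
            delta = residual / (1.0 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            v += (delta / (lam * n)) * X[i]
    return primal_from_dual(v)


if __name__ == "__main__":
    # Toy usage: sparse ground-truth weights recovered from noisy observations.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.0, 0.5]
    y = X @ w_true + 0.1 * rng.standard_normal(200)
    print(np.round(prox_sdca_squared_loss(X, y, lam=0.1, sigma=0.05), 2))
```

The accelerated variant described in the abstract would wrap an inner solver of this kind in outer iterations that add a momentum-like correction to the regularizer; that outer loop is omitted here for brevity.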

Related research

11/12/2012  Proximal Stochastic Dual Coordinate Ascent
We introduce a proximal version of dual coordinate ascent method. We dem...

09/10/2012  Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
Stochastic Gradient Descent (SGD) has become popular for solving large s...

04/13/2016  A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization
In modern large-scale machine learning applications, the training data a...

11/19/2020  Anderson acceleration of coordinate descent
Acceleration of first order methods is mainly obtained via inertial tech...

12/25/2017  A Random Block-Coordinate Douglas-Rachford Splitting Method with Low Computational Complexity for Binary Logistic Regression
In this paper, we propose a new optimization algorithm for sparse logist...

10/15/2017  Accelerated Block Coordinate Proximal Gradients with Applications in High Dimensional Statistics
Nonconvex optimization problems arise in different research fields and a...
