Adaptive Stochastic Primal-Dual Coordinate Descent for Separable Saddle Point Problems

06/12/2015
by Zhanxing Zhu, et al.

We consider a generic convex-concave saddle point problem with separable structure, a form that covers a wide range of machine learning applications. For this problem class, we follow the primal-dual update framework for saddle point problems and incorporate stochastic block coordinate descent with an adaptive stepsize. We show theoretically that the proposed adaptive stepsize can achieve a sharper linear convergence rate than existing methods. In addition, because a "mini-batch" of block coordinates can be selected for each update, the method is amenable to parallel processing on large-scale data. We apply the proposed method to regularized empirical risk minimization and show that it performs comparably with, and more often better than, state-of-the-art methods on both synthetic and real-world data sets.
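To make the setup concrete, the sketch below applies an SPDC-style stochastic primal-dual coordinate update to ridge regression written in saddle-point form, min_w max_alpha (1/n) sum_i ( alpha_i <a_i, w> - phi_i*(alpha_i) ) + (lambda/2)||w||^2. The function name adaptive_spdc_ridge and the per-coordinate stepsize heuristic (scaling sigma_i by 1/||a_i||) are illustrative assumptions and not the adaptive rule analyzed in the paper; mini-batching and the paper's convergence constants are likewise omitted.

```python
import numpy as np

def adaptive_spdc_ridge(A, b, lam, n_epochs=50, seed=0):
    """Minimal sketch of a stochastic primal-dual coordinate update for
    ridge regression in saddle-point form,

        min_w max_alpha (1/n) * sum_i ( alpha_i * <a_i, w> - phi_i*(alpha_i) )
                        + (lam/2) * ||w||^2,

    with phi_i*(alpha) = alpha^2/2 + b_i * alpha (conjugate of the squared
    loss).  The per-coordinate dual stepsizes below scale with 1/||a_i||,
    a simple heuristic stand-in for an adaptive rule; it is NOT the
    stepsize schedule proposed in the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    row_norms = np.linalg.norm(A, axis=1) + 1e-12

    w = np.zeros(d)
    w_prev = np.zeros(d)
    alpha = np.zeros(n)
    u = np.zeros(d)                 # u = (1/n) * A^T alpha, maintained incrementally

    # SPDC-style constants, with the data radius taken per row (heuristic).
    gamma = 1.0                     # strong convexity of phi_i* for squared loss
    sigma = np.sqrt(n * lam / gamma) / (2.0 * row_norms)          # dual steps, one per i
    tau = np.sqrt(gamma / (n * lam)) / (2.0 * row_norms.max())    # primal step
    theta = 1.0 - 1.0 / (n + row_norms.max() * np.sqrt(n / (lam * gamma)))

    for _ in range(n_epochs * n):
        i = rng.integers(n)
        a_i = A[i]

        # Extrapolated primal point.
        w_bar = w + theta * (w - w_prev)

        # Closed-form prox step on the sampled dual coordinate:
        # argmax_a  a*<a_i, w_bar> - phi_i*(a) - (a - alpha_i)^2 / (2*sigma_i).
        alpha_old = alpha[i]
        alpha[i] = (alpha_old + sigma[i] * (a_i @ w_bar - b[i])) / (1.0 + sigma[i])

        # Variance-reduced primal direction, then refresh the running average.
        g = u + (alpha[i] - alpha_old) * a_i
        u += (alpha[i] - alpha_old) / n * a_i

        # Prox step for the ridge term:
        # argmin_w  lam/2*||w||^2 + <g, w> + ||w - w_k||^2 / (2*tau).
        w_prev = w
        w = (w - tau * g) / (1.0 + tau * lam)

    return w
```

Scaling each dual stepsize by the corresponding row norm is one simple way of adapting the step to the geometry of the sampled block; the paper's adaptive scheme and its analysis should be consulted for the actual rule and constants.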


