A Stochastic Proximal Point Algorithm for Saddle-Point Problems

by Luo Luo, et al.

We consider saddle point problems whose objective functions are the average of n strongly convex-concave individual components. Recently, researchers have exploited variance reduction methods to solve such problems and achieved linear convergence guarantees. However, these methods converge slowly when the condition number of the problem is very large. In this paper, we propose a stochastic proximal point algorithm that accelerates the variance reduction method SAGA for saddle point problems. Compared with the catalyst framework, our algorithm removes a logarithmic factor of the condition number from the iteration complexity. We apply our algorithm to policy evaluation, and the empirical results show that our method is much more efficient than state-of-the-art methods.
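As a point of reference for the proximal point idea the abstract builds on, the sketch below runs an exact proximal point iteration on a toy strongly convex-concave quadratic saddle problem. This is only an illustrative minimal example, not the paper's SAGA-based stochastic algorithm: the objective, step size, and closed-form subproblem solution are all assumptions chosen so the prox step can be written explicitly.

```python
# Hypothetical illustration: exact proximal point iteration on the toy
# strongly convex-concave problem f(x, y) = (mu/2)x^2 + b*x*y - (mu/2)y^2,
# whose unique saddle point is (0, 0). This is NOT the paper's algorithm;
# the prox subproblem is solved in closed form because f is quadratic.

def prox_point_step(xk, yk, mu=1.0, b=1.0, eta=1.0):
    """One proximal point step:
    (x+, y+) = argmin_x argmax_y f(x, y)
               + (1/(2*eta))*(x - xk)**2 - (1/(2*eta))*(y - yk)**2.
    Setting the x- and y-gradients to zero yields the 2x2 linear system
        (mu + 1/eta)*x + b*y = xk/eta
        b*x - (mu + 1/eta)*y = -yk/eta
    which is solved explicitly below."""
    a = mu + 1.0 / eta
    denom = eta * (a * a + b * b)
    x_next = (a * xk - b * yk) / denom
    y_next = (b * xk + a * yk) / denom
    return x_next, y_next

x, y = 1.0, 1.0
for _ in range(30):
    x, y = prox_point_step(x, y)
print(abs(x) < 1e-8 and abs(y) < 1e-8)  # iterates contract toward (0, 0)
```

Each step contracts the distance to the saddle point by a fixed factor, which is the linear-convergence behavior the variance-reduced stochastic version is designed to retain at a lower per-iteration cost.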

