
Stochastic Hamiltonian Gradient Methods for Smooth Games

by Nicolas Loizou, et al.

The success of adversarial formulations in machine learning has brought renewed motivation for smooth games. In this work, we focus on the class of stochastic Hamiltonian methods and provide the first convergence guarantees for certain classes of stochastic smooth games. We propose a novel unbiased estimator for stochastic Hamiltonian gradient descent (SHGD) and highlight its benefits. Using tools from the optimization literature, we show that SHGD converges linearly to a neighbourhood of a stationary point. To guarantee convergence to the exact solution, we analyze SHGD with a decreasing step size, and we also present the first stochastic variance-reduced Hamiltonian method. Our results provide the first global non-asymptotic last-iterate convergence guarantees for the class of stochastic unconstrained bilinear games and for the more general class of stochastic games that satisfy a "sufficiently bilinear" condition, notably including some non-convex non-concave problems. We supplement our analysis with experiments on stochastic bilinear and sufficiently bilinear games, where our theory is shown to be tight, and on simple adversarial machine learning formulations.
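To make the idea concrete, here is a minimal sketch of the deterministic Hamiltonian gradient descent step on an unconstrained bilinear game min_x max_y f(x, y) = x^T A y. The method descends on the Hamiltonian H(w) = ½‖v(w)‖², where v(w) is the game's gradient vector field; all names, dimensions, and step-size choices below are illustrative assumptions, not taken from the paper (the paper's stochastic variants additionally use an unbiased two-sample gradient estimator of H, which this sketch omits).

```python
import numpy as np

# Illustrative setup for the bilinear game f(x, y) = x^T A y (assumed, not
# from the paper): random matrix A and random initial players x, y.
rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
x, y = rng.standard_normal(d), rng.standard_normal(d)

def hamiltonian(x, y):
    # H(w) = 0.5 * ||v(w)||^2 with v(w) = (grad_x f, -grad_y f) = (A y, -A^T x)
    gx, gy = A @ y, A.T @ x
    return 0.5 * (gx @ gx + gy @ gy)

# For this game the HGD step w <- w - eta * J(w)^T v(w) works out to
# x <- x - eta * A A^T x and y <- y - eta * A^T A y, i.e. plain gradient
# descent on the quadratic H.
eta = 0.9 / np.linalg.norm(A, 2) ** 2  # below 1/L, L = largest eigenvalue of A A^T
h0 = hamiltonian(x, y)
for _ in range(2000):
    x = x - eta * (A @ (A.T @ x))
    y = y - eta * (A.T @ (A @ y))
hT = hamiltonian(x, y)

print(h0, "->", hT)  # H decreases; the last iterate moves toward the saddle (0, 0)
```

Since H is a convex quadratic here, the step size eta < 1/L guarantees monotone decrease of H, which mirrors the last-iterate flavour of the guarantees stated in the abstract; the stochastic analysis in the paper replaces the exact gradient of H with an unbiased estimate built from independent samples.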




Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile

Owing to their connection with generative adversarial networks (GANs), s...

Average-case Acceleration for Bilinear Games and Normal Matrices

Advances in generative modeling and adversarial learning have given rise...

A Tight and Unified Analysis of Extragradient for a Whole Spectrum of Differentiable Games

We consider differentiable games: multi-objective minimization problems,...

Convergence Behaviour of Some Gradient-Based Methods on Bilinear Zero-Sum Games

Min-max formulations have attracted great attention in the ML community ...

Linear Last-iterate Convergence for Matrix Games and Stochastic Games

Optimistic Gradient Descent Ascent (OGDA) algorithm for saddle-point opt...

EigenGame Unloaded: When playing games is better than optimizing

We build on the recently proposed EigenGame that views eigendecompositio...