Convergence Behaviour of Some Gradient-Based Methods on Bilinear Zero-Sum Games

08/15/2019
by   Guojun Zhang, et al.

Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, and understanding the dynamics of (stochastic) gradient algorithms for solving such formulations has been a grand challenge. As a first step, we restrict ourselves to bilinear zero-sum games and give a systematic analysis of popular gradient updates, for both simultaneous and alternating versions. We provide exact conditions for their convergence and find the optimal parameter setup and convergence rates. In particular, our results offer formal evidence that alternating updates converge "better" than simultaneous ones.
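The contrast between simultaneous and alternating updates is easy to see numerically. The sketch below (an illustration, not code from the paper; the step size and iteration count are arbitrary choices) runs plain gradient descent-ascent on the scalar bilinear game min_x max_y f(x, y) = x·y, whose unique equilibrium is (0, 0). With simultaneous updates the iterates spiral outward, while alternating updates keep them bounded:

```python
# Simultaneous vs. alternating gradient descent-ascent on the
# bilinear zero-sum game min_x max_y f(x, y) = x * y.
# Illustrative sketch only; eta and steps are arbitrary choices.

def simultaneous_gda(x, y, eta, steps):
    for _ in range(steps):
        # both players update using the OLD iterates
        x, y = x - eta * y, y + eta * x
    return x, y

def alternating_gda(x, y, eta, steps):
    for _ in range(steps):
        x = x - eta * y   # min player moves first
        y = y + eta * x   # max player reacts to the updated x
    return x, y

def norm(x, y):
    return (x * x + y * y) ** 0.5

x0, y0, eta, steps = 1.0, 1.0, 0.1, 200
print(norm(*simultaneous_gda(x0, y0, eta, steps)))  # grows without bound
print(norm(*alternating_gda(x0, y0, eta, steps)))   # stays bounded
```

For simultaneous updates the squared distance to the equilibrium is multiplied by (1 + eta^2) every step, so it diverges for any positive step size; the alternating update is a symplectic-Euler-style step whose iterates remain on a bounded orbit for small eta, which is one concrete sense in which alternating updates behave "better" here.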

