Convergence Behaviour of Some Gradient-Based Methods on Bilinear Zero-Sum Games
Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, and understanding the dynamics of (stochastic) gradient algorithms for solving such formulations has been a grand challenge. As a first step, we restrict ourselves to bilinear zero-sum games and give a systematic analysis of popular gradient updates, for both simultaneous and alternating versions. We provide exact conditions for their convergence and find the optimal parameter setup and convergence rates. In particular, our results offer formal evidence that alternating updates converge "better" than simultaneous ones.
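The contrast between simultaneous and alternating updates can be sketched on the simplest bilinear zero-sum game, f(x, y) = xy, where the min player controls x and the max player controls y. This is a minimal illustration of the phenomenon the abstract describes, not the paper's full analysis; the step size and iteration count below are arbitrary choices:

```python
import math

def simultaneous_gda(x, y, eta, steps):
    """Simultaneous gradient descent-ascent on f(x, y) = x*y:
    both players update from the same iterate."""
    for _ in range(steps):
        # RHS is evaluated before assignment, so y's update sees the old x.
        x, y = x - eta * y, y + eta * x
    return x, y

def alternating_gda(x, y, eta, steps):
    """Alternating gradient descent-ascent on f(x, y) = x*y:
    the max player reacts to the min player's fresh iterate."""
    for _ in range(steps):
        x = x - eta * y
        y = y + eta * x  # uses the just-updated x
    return x, y

xs, ys = simultaneous_gda(1.0, 1.0, eta=0.1, steps=500)
xa, ya = alternating_gda(1.0, 1.0, eta=0.1, steps=500)

# Simultaneous updates spiral outward: the squared norm grows by a
# factor (1 + eta^2) per step, so the iterates diverge.
print("simultaneous norm:", math.hypot(xs, ys))

# Alternating updates have a unit-determinant linear map for small eta,
# so the iterates stay on a bounded ellipse instead of blowing up.
print("alternating norm:", math.hypot(xa, ya))
```

The divergence of simultaneous updates and boundedness of alternating updates on this toy game is one concrete sense in which alternating updates behave "better", in line with the abstract's claim.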