O(1/T) Time-Average Convergence in a Generalization of Multiagent Zero-Sum Games
We introduce a generalization of zero-sum network multiagent matrix games and prove that alternating gradient descent converges to the set of Nash equilibria at rate O(1/T) for this set of games. Alternating gradient descent obtains this convergence guarantee while using fixed learning rates that are four times larger than those of the optimistic variant of gradient descent. Experimentally, we show with 97.5% confidence that time-averaged strategies produced by alternating gradient descent are 2.585 times closer to the set of Nash equilibria than those of optimistic gradient descent.
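As a minimal sketch of the algorithm the abstract describes, the following toy example runs alternating gradient descent on the scalar zero-sum bilinear game f(x, y) = x * y, whose unique Nash equilibrium is (0, 0). The scalar game, step size, and iteration counts are illustrative assumptions, not the paper's general network multiagent setting.

```python
# Toy illustration (an assumption, not the paper's setup): alternating
# gradient descent on the zero-sum bilinear game f(x, y) = x * y.
# The minimizer x updates first; the maximizer y then responds to the
# UPDATED x -- this ordering is what makes the method "alternating".

def alternating_gd(x0, y0, eta, steps):
    """Return the time-averaged strategies after `steps` alternating updates."""
    x, y = x0, y0
    sum_x = sum_y = 0.0
    for _ in range(steps):
        x = x - eta * y   # minimizer's gradient step on f(x, y) = x * y
        y = y + eta * x   # maximizer's gradient step, using the new x
        sum_x += x
        sum_y += y
    return sum_x / steps, sum_y / steps

if __name__ == "__main__":
    # The time averages approach the Nash equilibrium (0, 0) as steps grow.
    for steps in (100, 1000, 10000):
        xa, ya = alternating_gd(1.0, 1.0, eta=0.1, steps=steps)
        print(steps, abs(xa), abs(ya))
```

Because the alternating update is area-preserving on this bilinear game, the raw iterates stay on a bounded orbit around the equilibrium, and the time-averaged strategies shrink toward (0, 0) roughly like 1/T, consistent with the O(1/T) rate stated above.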