Forward Looking Best-Response Multiplicative Weights Update Methods

06/07/2021 · Michail Fasoulakis, et al.

We propose a novel variant of the multiplicative weights update method with forward-looking best-response strategies that guarantees last-iterate convergence for zero-sum games with a unique Nash equilibrium. Specifically, we show that the proposed algorithm converges to an η^1/ρ-approximate Nash equilibrium, with ρ > 1, by decreasing the Kullback-Leibler divergence of each iterate at a rate of at least Ω(η^1+1/ρ), for a sufficiently small learning rate η. Once our method enters a sufficiently small neighborhood of the solution, it becomes a contraction and converges to the Nash equilibrium of the game. Furthermore, we perform an experimental comparison with the recently proposed optimistic variant of the multiplicative weights update method, by <cit.>, which has also been proved to attain last-iterate convergence. Our findings reveal that our algorithm offers substantial gains in both the convergence rate and the region of contraction relative to the previous approach.
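For context, a minimal sketch of the *vanilla* multiplicative weights update on a two-player zero-sum matrix game is shown below. The abstract does not specify the forward-looking best-response rule, so this is only the textbook baseline: its last iterate is known to cycle around the equilibrium (which motivates last-iterate-convergent variants such as the one proposed here), although its time-averaged strategies do converge. The function name, step counts, and learning rate are illustrative choices, not the paper's.

```python
import numpy as np

def mwu_zero_sum(A, eta=0.02, steps=20000):
    """Vanilla simultaneous MWU on payoff matrix A (row player maximizes x^T A y).

    Returns the time-averaged strategies, which converge to an approximate
    Nash equilibrium; the last iterates themselves cycle and need not converge.
    """
    m, n = A.shape
    x = np.array([0.8, 0.2]) if m == 2 else np.ones(m) / m  # start off-uniform
    y = np.array([0.3, 0.7]) if n == 2 else np.ones(n) / n
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for _ in range(steps):
        x_avg += x
        y_avg += y
        gx = A @ y            # row player's payoff vector (ascends)
        gy = A.T @ x          # column player's loss vector (descends)
        x = x * np.exp(eta * gx)
        y = y * np.exp(-eta * gy)
        x /= x.sum()          # renormalize to the simplex
        y /= y.sum()
    return x_avg / steps, y_avg / steps

# Matching pennies: unique mixed Nash equilibrium at (1/2, 1/2) for both players.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = mwu_zero_sum(A)
```

On this example the averaged strategies approach the uniform equilibrium, while the per-iterate strategies spiral around it; a last-iterate-convergent update removes that cycling.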


