
On the Rate of Convergence of Payoff-based Algorithms to Nash Equilibrium in Strongly Monotone Games

by Tatiana Tatarenko, et al.

We derive the rate of convergence to Nash equilibria for the payoff-based algorithm proposed in <cit.>. These rates are achieved under the standard assumptions of convexity of the game, strong monotonicity, and differentiability of the pseudo-gradient. In particular, we show that the algorithm achieves O(1/T) in the two-point function evaluation setting and O(1/√(T)) in the one-point function evaluation setting under the additional requirement of Lipschitz continuity of the pseudo-gradient. To our knowledge, these are the best known rates for the corresponding problem classes.
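The distinction between the two settings is how many payoff queries a player makes per iteration to estimate its pseudo-gradient. A minimal sketch of the two standard zeroth-order estimators is below; it is illustrative only and assumes a generic smoothed-gradient scheme (the function names and the smoothing parameter `delta` are not the paper's notation):

```python
import numpy as np

def random_unit_vector(d, rng):
    """Uniform random direction on the unit sphere in R^d."""
    u = rng.standard_normal(d)
    return u / np.linalg.norm(u)

def two_point_estimate(f, x, delta, rng):
    """Two-point payoff-based gradient estimate: two payoff queries
    per step along +/- a random direction. The lower-variance
    estimate is what enables the faster O(1/T)-type rates."""
    u = random_unit_vector(x.size, rng)
    return x.size * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

def one_point_estimate(f, x, delta, rng):
    """One-point estimate: a single payoff query per step. It is
    unbiased for the smoothed payoff but much noisier, consistent
    with the slower O(1/sqrt(T))-type rates."""
    u = random_unit_vector(x.size, rng)
    return (x.size / delta) * f(x + delta * u) * u
```

In a game, each player would apply such an estimator to its own payoff function (with the other players' actions fixed at their current values) and take a projected step along the negative estimate.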


