
On the Rate of Convergence of Payoff-based Algorithms to Nash Equilibrium in Strongly Monotone Games

02/22/2022
by   Tatiana Tatarenko, et al.

We derive the rate of convergence to Nash equilibria for the payoff-based algorithm proposed in <cit.>. These rates are achieved under the standard assumptions of convexity of the game, strong monotonicity, and differentiability of the pseudo-gradient. In particular, we show that the algorithm achieves a rate of O(1/T) in the two-point function evaluation setting and O(1/√(T)) in the one-point function evaluation setting under the additional requirement of Lipschitz continuity of the pseudo-gradient. To our knowledge, these are the best known rates for the corresponding problem classes.
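To illustrate the two-point payoff-based setting the abstract refers to, the sketch below runs a simple Nash-seeking iteration on a hypothetical two-player game with quadratic costs and a strongly monotone pseudo-gradient. The game, cost functions, starting point, and step size are all illustrative assumptions, not the paper's actual construction; in particular, the paper's algorithm uses randomized query points, while this sketch uses a plain central difference of each player's own payoff. Each player queries only its own cost at two perturbed actions and never observes the opponent's cost or gradient.

```python
# Hypothetical two-player game with quadratic costs. The pseudo-gradient
# F(x) = (dJ1/dx1, dJ2/dx2) is strongly monotone here, so the game has a
# unique Nash equilibrium, which works out to x* = (1, 0).
def J1(x1, x2):  # cost of player 1 (minimized over x1)
    return (x1 - 1.0) ** 2 + x1 * x2

def J2(x1, x2):  # cost of player 2 (minimized over x2)
    return (x2 + 0.5) ** 2 - x1 * x2

def two_point_estimate(cost_of_xi, xi, delta):
    """Two-point payoff-based estimate of a player's partial derivative:
    two evaluations of the player's own cost at xi +/- delta."""
    return (cost_of_xi(xi + delta) - cost_of_xi(xi - delta)) / (2.0 * delta)

def payoff_based_nash_seeking(steps=200, gamma=0.1, delta=1e-3):
    x1, x2 = 0.0, 0.0  # arbitrary initial actions
    for _ in range(steps):
        # Each player estimates its own partial gradient from payoffs only.
        g1 = two_point_estimate(lambda a: J1(a, x2), x1, delta)
        g2 = two_point_estimate(lambda a: J2(x1, a), x2, delta)
        # Simultaneous pseudo-gradient descent step.
        x1, x2 = x1 - gamma * g1, x2 - gamma * g2
    return x1, x2

x1, x2 = payoff_based_nash_seeking()
print(x1, x2)  # approaches the Nash equilibrium (1, 0)
```

Strong monotonicity is what makes this iteration contract toward the unique equilibrium; the one-point setting replaces the two queries per step with a single noisy payoff evaluation, at the cost of the slower O(1/√(T)) rate quoted above.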

