On the convergence of policy gradient methods to Nash equilibria in general stochastic games

10/17/2022
by Angeliki Giannou, et al.

Learning in stochastic games is a notoriously difficult problem because, in addition to each other's strategic decisions, the players must also contend with the fact that the game itself evolves over time, possibly in a very complicated manner. Because of this, the convergence properties of popular learning algorithms, like policy gradient and its variants, are poorly understood, except in specific classes of games (such as potential or two-player, zero-sum games). In view of this, we examine the long-run behavior of policy gradient methods with respect to Nash equilibrium policies that are second-order stationary (SOS) in a sense similar to the type of sufficiency conditions used in optimization. Our first result is that SOS policies are locally attracting with high probability, and we show that policy gradient trajectories with gradient estimates provided by the REINFORCE algorithm achieve an 𝒪(1/√(n)) distance-squared convergence rate if the method's step-size is chosen appropriately. Subsequently, specializing to the class of deterministic Nash policies, we show that this rate can be improved dramatically and, in fact, policy gradient methods converge within a finite number of iterations in that case.
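To make the setup concrete, here is a minimal sketch of the kind of method the abstract describes: gradient ascent on a softmax policy, with the gradient estimated by single-sample REINFORCE (score-function) estimates and a decreasing step-size schedule of the 1/√n type mentioned above. The toy game (a single state with two actions and a hypothetical payoff vector) and all variable names are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-state game: expected reward of each of two actions.
payoff = np.array([1.0, 0.0])

def softmax(theta):
    """Softmax policy parameterization over actions."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

def reinforce_grad(theta):
    """One-sample REINFORCE estimate: reward * grad log pi(action)."""
    pi = softmax(theta)
    a = rng.choice(len(pi), p=pi)                    # sample an action
    r = payoff[a] + 0.1 * rng.standard_normal()      # noisy reward signal
    grad_log = -pi                                   # d/d_theta log pi(a)
    grad_log[a] += 1.0
    return r * grad_log

theta = np.zeros(2)
for n in range(1, 5001):
    step = 1.0 / np.sqrt(n)      # decreasing step-size schedule
    theta += step * reinforce_grad(theta)

# After training, the policy should concentrate on the better action.
print(softmax(theta))
```

This is only a single-agent, single-state caricature; in the stochastic games analyzed in the paper, each player runs such an update simultaneously while the state of the game evolves, which is precisely what makes convergence to Nash policies delicate.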

research
07/27/2021

Policy Gradient Methods Find the Nash Equilibrium in N-player General-sum Linear-quadratic Games

We consider a general-sum N-player linear-quadratic game with stochastic...
research
02/08/2022

Independent Policy Gradient for Large-Scale Markov Potential Games: Sharper Rates, Function Approximation, and Game-Agnostic Convergence

We examine global non-asymptotic convergence properties of policy gradie...
research
01/11/2021

Independent Policy Gradient Methods for Competitive Reinforcement Learning

We obtain global, non-asymptotic convergence guarantees for independent ...
research
02/17/2021

Provably Efficient Policy Gradient Methods for Two-Player Zero-Sum Markov Games

Policy gradient methods are widely used in solving two-player zero-sum g...
research
06/15/2022

Convergence and Price of Anarchy Guarantees of the Softmax Policy Gradient in Markov Potential Games

We study the performance of policy gradient methods for the subclass of ...
research
05/24/2023

Adaptive Policy Learning to Additional Tasks

This paper develops a policy learning method for tuning a pre-trained po...
research
04/26/2021

Computational Performance of Deep Reinforcement Learning to find Nash Equilibria

We test the performance of deep deterministic policy gradient (DDPG), a ...