Policy Optimization for Markov Games: Unified Framework and Faster Convergence

06/06/2022
by Runyu Zhang, et al.

This paper studies policy optimization algorithms for multi-agent reinforcement learning. We begin by proposing an algorithmic framework for two-player zero-sum Markov Games in the full-information setting, where each iteration consists of a policy update step at each state using a certain matrix game algorithm, and a value update step with a certain learning rate. This framework unifies many existing and new policy optimization algorithms. We show that the state-wise average policy of this algorithm converges to an approximate Nash equilibrium (NE) of the game, as long as the matrix game algorithms achieve low weighted regret at each state, with respect to weights determined by the speed of the value updates. Next, we show that this framework instantiated with the Optimistic Follow-The-Regularized-Leader (OFTRL) algorithm at each state (and smooth value updates) can find an 𝒪(T^-5/6) approximate NE in T iterations, and that a similar algorithm with a slightly modified value update rule achieves a faster 𝒪(T^-1) convergence rate. These improve over the current best 𝒪(T^-1/2) rate of symmetric policy-optimization-type algorithms. We also extend this algorithm to multi-player general-sum Markov Games and show an 𝒪(T^-3/4) convergence rate to Coarse Correlated Equilibria (CCE). Finally, we provide a numerical example to verify our theory and investigate the importance of smooth value updates, finding that using "eager" value updates instead (equivalent to the independent natural policy gradient algorithm) may significantly slow down convergence, even on a simple game with H=2 layers.
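To make the two-step structure of the framework concrete, here is a minimal NumPy sketch: at every iteration it runs an entropy-regularized optimistic FTRL update on the matrix game induced at each state by the current value estimates, then performs a smooth (incremental) value update. The function name oftrl_smooth_value, the arguments eta and alpha_fn, and the specific optimism and value-averaging choices are illustrative assumptions based on the abstract, not the paper's exact algorithm or step sizes.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def oftrl_smooth_value(P, R, gamma, T, eta, alpha_fn):
    """One run of the two-step framework on a tabular zero-sum Markov game (sketch).

    P[s, a1, a2, s'] : transition probabilities
    R[s, a1, a2]     : reward to player 1 (player 2 receives -R)
    eta              : OFTRL step size for the per-state matrix games
    alpha_fn(t)      : value-update learning rate at iteration t ("smooth" if slowly decaying)
    """
    S, A1, A2, _ = P.shape
    V = np.zeros(S)                       # per-state value estimates
    x = np.ones((S, A1)) / A1             # player 1 policies (one distribution per state)
    y = np.ones((S, A2)) / A2             # player 2 policies
    Gx = np.zeros((S, A1))                # cumulative payoff vectors for FTRL, player 1
    Gy = np.zeros((S, A2))                # cumulative payoff vectors for FTRL, player 2

    for t in range(1, T + 1):
        # Matrix game at each state induced by the current value estimates
        Q = R + gamma * np.einsum('sabk,k->sab', P, V)
        for s in range(S):
            gx = Q[s] @ y[s]              # player 1's payoff vector against opponent's policy
            gy = -Q[s].T @ x[s]           # player 2's payoff vector (zero-sum)
            Gx[s] += gx
            Gy[s] += gy
            # Optimistic FTRL with entropy regularizer: cumulative payoffs plus one
            # extra copy of the most recent payoff vector as the optimistic prediction
            x[s] = softmax(eta * (Gx[s] + gx))
            y[s] = softmax(eta * (Gy[s] + gy))
            # Smooth value update: move V[s] slowly toward the current matrix-game value
            a = alpha_fn(t)
            V[s] = (1 - a) * V[s] + a * (x[s] @ Q[s] @ y[s])
    return x, y, V

For instance, a slowly decaying rate such as alpha_fn = lambda t: 2 / (t + 1) plays the role of a "smooth" value update, whereas alpha_fn = lambda t: 1.0 corresponds to the "eager" updates the abstract warns about. Note that the paper's convergence guarantees are stated for a weighted state-wise average policy, which this sketch does not track.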


Related research

10/03/2022
Faster Last-iterate Convergence of Policy Optimization in Zero-Sum Markov Games
Multi-Agent Reinforcement Learning (MARL) – where multiple agents learn ...

08/15/2023
Near-Optimal Last-iterate Convergence of Policy Optimization in Zero-sum Polymatrix Markov games
Computing approximate Nash equilibria in multi-player general-sum Markov...

06/01/2019
Neural Replicator Dynamics
In multiagent learning, agents interact in inherently nonstationary envi...

06/04/2021
Consensus Multiplicative Weights Update: Learning to Learn using Projector-based Game Signatures
Recently, Optimistic Multiplicative Weights Update (OMWU) was proven to ...

09/26/2022
O(T^-1) Convergence of Optimistic-Follow-the-Regularized-Leader in Two-Player Zero-Sum Markov Games
We prove that optimistic-follow-the-regularized-leader (OFTRL), together...

06/19/2022
The Power of Regularization in Solving Extensive-Form Games
In this paper, we investigate the power of regularization, a common tech...

02/17/2021
Provably Efficient Policy Gradient Methods for Two-Player Zero-Sum Markov Games
Policy gradient methods are widely used in solving two-player zero-sum g...
