Efficiently Computing Nash Equilibria in Adversarial Team Markov Games

08/03/2022
by Fivos Kalogiannis, et al.

Computing Nash equilibrium policies is a central problem in multi-agent reinforcement learning that has received extensive attention both in theory and in practice. However, provable guarantees have thus far been either limited to fully competitive or cooperative scenarios, or have imposed strong assumptions that are difficult to meet in most practical applications. In this work, we depart from those prior results by investigating infinite-horizon adversarial team Markov games, a natural and well-motivated class of games in which a team of identically-interested players, in the absence of any explicit coordination or communication, competes against an adversarial player. This setting allows for a unifying treatment of zero-sum Markov games and Markov potential games, and serves as a step toward modeling more realistic strategic interactions that feature both competing and cooperative interests. Our main contribution is the first algorithm for computing stationary ϵ-approximate Nash equilibria in adversarial team Markov games with computational complexity that is polynomial in all the natural parameters of the game, as well as 1/ϵ. The proposed algorithm is particularly natural and practical, and it is based on performing independent policy gradient steps for each player in the team, in tandem with best responses from the side of the adversary; in turn, the policy for the adversary is then obtained by solving a carefully constructed linear program. Our analysis leverages non-standard techniques to establish the KKT optimality conditions for a nonlinear program with nonconvex constraints, thereby leading to a natural interpretation of the induced Lagrange multipliers. Along the way, we significantly extend an important characterization of optimal policies in adversarial (normal-form) team games due to Von Stengel and Koller (GEB '97).
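To make the high-level loop described above concrete, here is a minimal sketch of "independent policy gradient steps for the team, in tandem with best responses from the adversary," specialized to a toy normal-form adversarial team game for readability. The payoff tensor, step size, iteration count, and projection routine are illustrative assumptions rather than the paper's construction, and the final linear program that recovers the adversary's equilibrium policy from the induced Lagrange multipliers is omitted.

```python
# Sketch only: two identically-interested team players vs. a single adversary
# in a normal-form game. Everything below (payoff tensor, eta, iteration count)
# is an illustrative assumption, not the paper's exact algorithm or parameters.
import numpy as np

rng = np.random.default_rng(0)

n_actions_team = (3, 3)   # action counts for the two team players (assumed)
n_actions_adv = 4         # action count for the adversary (assumed)

# payoff[a1, a2, b] = reward to the team; the adversary receives its negation.
payoff = rng.normal(size=(*n_actions_team, n_actions_adv))

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def team_value(x1, x2, b):
    """Expected team payoff when the adversary plays pure action b."""
    return x1 @ payoff[:, :, b] @ x2

# Independent projected policy-gradient ascent for the team,
# in tandem with a best response from the adversary.
x1 = np.full(n_actions_team[0], 1.0 / n_actions_team[0])
x2 = np.full(n_actions_team[1], 1.0 / n_actions_team[1])
eta = 0.1

for t in range(2000):
    # Adversary best-responds, i.e., minimizes the team's expected payoff.
    b = min(range(n_actions_adv), key=lambda a: team_value(x1, x2, a))

    # Each team player ascends its own policy gradient independently.
    g1 = payoff[:, :, b] @ x2    # gradient of the team value w.r.t. x1
    g2 = x1 @ payoff[:, :, b]    # gradient of the team value w.r.t. x2
    x1 = project_simplex(x1 + eta * g1)
    x2 = project_simplex(x2 + eta * g2)

print("team policies:", x1.round(3), x2.round(3))
print("team value vs. best-responding adversary:",
      min(team_value(x1, x2, a) for a in range(n_actions_adv)))
```

In the paper's Markov-game setting the same template applies state-wise over value functions rather than to a single payoff matrix, and the adversary's output policy is extracted by the linear program mentioned in the abstract rather than read off the last best response.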

Related research

01/05/2023 · Algorithms and Complexity for Computing Nash Equilibria in Adversarial Team Games
Adversarial team games model multiplayer strategic interactions in which...

11/07/2021 · Teamwork makes von Neumann work: Min-Max Optimization in Two-Team Zero-Sum Games
Motivated by recent advances in both theoretical and applied aspects of ...

09/26/2020 · Computing Ex Ante Coordinated Team-Maxmin Equilibria in Zero-Sum Multiplayer Extensive-Form Games
Computational game theory has many applications in the modern world in b...

08/22/2022 · Minimax-Optimal Multi-Agent RL in Markov Games With a Generative Model
This paper studies multi-agent reinforcement learning in Markov games, w...

06/03/2021 · Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games
Potential games are arguably one of the most important and widely studie...

05/16/2021 · Optimal control of robust team stochastic games
In stochastic dynamic environments, team stochastic games have emerged a...

12/17/2019 · Controlling network coordination games
We study a novel control problem in the context of network coordination ...
