Towards General Function Approximation in Zero-Sum Markov Games

07/30/2021
by Baihe Huang, et al.

This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves. The study focuses on the challenging setting where the value function or the model is parameterized by a general function class. Provably efficient algorithms are developed for both the decoupled and the coordinated settings. In the decoupled setting, where the agent controls a single player and plays against an arbitrary opponent, we propose a new model-free algorithm whose sample complexity is governed by the Minimax Eluder dimension, a new complexity measure of the function class in Markov games. As a special case, this method improves on the state-of-the-art algorithm's regret by a factor of √(d) when the reward function and transition kernel are parameterized with d-dimensional linear features. In the coordinated setting, where the agent controls both players, we propose a model-based algorithm and a model-free algorithm. For the model-based algorithm, we prove that the sample complexity can be bounded by a generalization of the Witness rank to Markov games. The model-free algorithm enjoys a √(K) regret upper bound, where K is the number of episodes.
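A core per-state subroutine in algorithms for zero-sum Markov games with simultaneous moves is solving a zero-sum matrix game for its minimax value and strategy. The sketch below is a minimal illustration of that subroutine only, not the paper's algorithm; the function name `matrix_game_value` and the matching-pennies payoff matrix are illustrative choices, and the minimax problem is solved with a standard linear-programming formulation via SciPy.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Return (value, row_strategy) of the zero-sum game max_x min_y x^T A y.

    Standard LP formulation: maximize v subject to A^T x >= v * 1,
    sum(x) = 1, x >= 0. Variables are [x_1, ..., x_m, v].
    """
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                      # linprog minimizes, so minimize -v
    # For each column j: v - sum_i A[i, j] x_i <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Row strategy is a probability distribution.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # x >= 0, v unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

# Matching pennies: the minimax value is 0 and the optimal strategy is uniform.
value, strategy = matrix_game_value(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(round(value, 6), np.round(strategy, 3))
```

In the full Markov-game setting, a value-iteration-style method would solve one such matrix game per state and stage, with payoffs given by the current Q-value estimates rather than a fixed matrix.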

