Towards General Function Approximation in Zero-Sum Markov Games

07/30/2021
by   Baihe Huang, et al.

This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves. The study focuses on the challenging setting in which the value function or the model is parameterized by a general function class. Provably efficient algorithms are developed for both the decoupled and the coordinated settings. In the decoupled setting, where the agent controls a single player and plays against an arbitrary opponent, we propose a new model-free algorithm. Its sample complexity is governed by the Minimax Eluder dimension, a new complexity measure of the function class in Markov games. As a special case, when the reward function and transition kernel are parameterized with d-dimensional linear features, this method improves the regret of the state-of-the-art algorithm by a factor of √(d). In the coordinated setting, where both players are controlled by the agent, we propose a model-based algorithm and a model-free algorithm. For the model-based algorithm, we prove that the sample complexity can be bounded by a generalization of Witness rank to Markov games. The model-free algorithm achieves a √(K) regret upper bound, where K is the number of episodes.
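For reference, the value of a zero-sum Markov game with simultaneous moves is defined step by step through a minimax (Nash) Bellman recursion. The display below is a standard textbook formulation of that recursion, included here only to clarify the setting; the notation (state s, actions a and b for the two players, step h, horizon H, reward r_h, transition kernel P_h) is chosen for illustration and is not quoted from the paper.

\[
V_h(s) \;=\; \max_{\mu \in \Delta(\mathcal{A})} \; \min_{\nu \in \Delta(\mathcal{B})} \; \mathbb{E}_{a \sim \mu,\, b \sim \nu}\!\Big[\, r_h(s,a,b) \;+\; \mathbb{E}_{s' \sim P_h(\cdot \mid s,a,b)}\, V_{h+1}(s') \,\Big], \qquad V_{H+1} \equiv 0 .
\]

At every state and step, the two players solve a matrix game over their action distributions μ and ν; the algorithms described in the abstract approximate this value when r_h and P_h (or the value function itself) are only accessible through a general function class.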
