Almost Optimal Algorithms for Two-player Markov Games with Linear Function Approximation

02/15/2021
by   Zixiang Chen, et al.

We study reinforcement learning for two-player zero-sum Markov games with simultaneous moves in the finite-horizon setting, where the transition kernel of the underlying Markov game can be parameterized by a linear function over the current state, both players' actions, and the next state. In particular, we assume that we can control both players and aim to find the Nash equilibrium by minimizing the duality gap. We propose an algorithm, Nash-UCRL-VTR, based on the principle of "Optimism-in-the-Face-of-Uncertainty". Our algorithm only needs to find a Coarse Correlated Equilibrium (CCE) at each step, which can be computed efficiently. Specifically, we show that Nash-UCRL-VTR provably achieves an Õ(dH√(T)) regret, where d is the dimension of the linear parameterization, H is the horizon of the game, and T is the total number of steps in the game. To assess the optimality of our algorithm, we also prove an Ω̃(dH√(T)) lower bound on the regret. Our upper bound matches the lower bound up to logarithmic factors, which suggests the near-optimality of our algorithm.
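The abstract emphasizes that the algorithm only needs a Coarse Correlated Equilibrium rather than an exact Nash equilibrium at each step. As a rough illustration of why a CCE is cheap to compute, the sketch below finds a CCE of a single two-player matrix game by solving a linear feasibility problem. This is not the paper's Nash-UCRL-VTR algorithm; the function name, the use of scipy.optimize.linprog, and the example payoff matrices are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def coarse_correlated_equilibrium(U1, U2):
    """Compute a CCE of a two-player matrix game via linear programming.

    U1, U2: payoff matrices of shape (m, n) for players 1 and 2.
    Returns a joint distribution pi over action pairs, shape (m, n).
    """
    m, n = U1.shape
    num_vars = m * n

    # Inequality constraints A_ub @ pi <= 0: no player gains in expectation
    # by deviating to any fixed action instead of following the joint draw.
    rows = []
    for a_dev in range(m):  # player 1 deviations
        # coeff[a, b] = u1(a_dev, b) - u1(a, b)
        rows.append((U1[a_dev, :][None, :] - U1).ravel())
    for b_dev in range(n):  # player 2 deviations
        # coeff[a, b] = u2(a, b_dev) - u2(a, b)
        rows.append((U2[:, b_dev][:, None] - U2).ravel())
    A_ub = np.array(rows)
    b_ub = np.zeros(m + n)

    # Equality constraint: probabilities sum to 1.
    A_eq = np.ones((1, num_vars))
    b_eq = np.array([1.0])

    # Pure feasibility problem: any feasible point is a CCE, so the
    # objective is zero.
    res = linprog(c=np.zeros(num_vars), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
    return res.x.reshape(m, n)

# Example: matching pennies (zero-sum), where the uniform joint
# distribution is one valid CCE.
U1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
pi = coarse_correlated_equilibrium(U1, -U1)
print(pi)
```

In the paper's setting this kind of equilibrium subroutine would be applied to the optimistic stage games induced by the estimated linear model, rather than to a fixed payoff matrix as in this sketch.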
