When Can We Learn General-Sum Markov Games with a Large Number of Players Sample-Efficiently?

10/08/2021
by Ziang Song, et al.

Multi-agent reinforcement learning has made substantial empirical progress in solving games with a large number of players. Theoretically, however, the best known sample complexity for finding a Nash equilibrium in general-sum games scales exponentially in the number of players due to the size of the joint action space, and there is a matching exponential lower bound. This paper investigates which learning goals admit better sample complexities in the setting of m-player general-sum Markov games with H steps, S states, and A_i actions per player. First, we design algorithms for learning an ϵ-Coarse Correlated Equilibrium (CCE) in 𝒪(H^5 S max_{i≤m} A_i / ϵ^2) episodes, and an ϵ-Correlated Equilibrium (CE) in 𝒪(H^6 S max_{i≤m} A_i^2 / ϵ^2) episodes. These are the first results for learning CCE and CE with sample complexities polynomial in max_{i≤m} A_i. Our algorithm for learning CE integrates an adversarial bandit subroutine that minimizes a weighted swap regret with several novel designs in the outer loop. Second, we consider the important special case of Markov Potential Games, and design an algorithm that learns an ϵ-approximate Nash equilibrium within 𝒪(S ∑_{i≤m} A_i / ϵ^3) episodes (highlighting only the dependence on S, A_i, and ϵ). This bound depends only linearly on ∑_{i≤m} A_i and significantly improves over the best known algorithm in its ϵ dependence. Overall, our results shed light on which equilibrium notions or structural assumptions on the game may enable sample-efficient learning with many players.
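To make the stated scalings concrete, the following minimal Python sketch evaluates the episode counts quoted above for an example game size. It drops constants and logarithmic factors, and the parameter values (H, S, m, A_i, ϵ) are hypothetical choices for illustration, not taken from the paper.

from math import prod

def cce_episodes(H, S, A, eps):
    """~ H^5 * S * max_i A_i / eps^2 episodes to learn an eps-CCE."""
    return H**5 * S * max(A) / eps**2

def ce_episodes(H, S, A, eps):
    """~ H^6 * S * max_i A_i^2 / eps^2 episodes to learn an eps-CE."""
    return H**6 * S * max(A)**2 / eps**2

def mpg_nash_episodes(S, A, eps):
    """~ S * sum_i A_i / eps^3 episodes for an eps-Nash equilibrium in a
    Markov potential game (only the S, A_i, eps dependence is quoted above)."""
    return S * sum(A) / eps**3

if __name__ == "__main__":
    H, S, eps = 5, 10, 0.1   # hypothetical horizon, state count, accuracy
    A = [4] * 8              # hypothetical: m = 8 players, 4 actions each
    print(f"eps-CCE : ~{cce_episodes(H, S, A, eps):,.0f} episodes")
    print(f"eps-CE  : ~{ce_episodes(H, S, A, eps):,.0f} episodes")
    print(f"MPG Nash: ~{mpg_nash_episodes(S, A, eps):,.0f} episodes")
    # For contrast, the joint action space prod_i A_i = 4^8 = 65,536 here is
    # what drives the exponential-in-m sample complexity for Nash equilibria
    # in general general-sum games.
    print(f"joint action space size: {prod(A):,}")

Note that all three bounds above scale only with max_{i≤m} A_i or ∑_{i≤m} A_i rather than with the product ∏_{i≤m} A_i, which is the point of the paper's results.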
