Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity

07/15/2020
by Kaiqing Zhang, et al.

Model-based reinforcement learning (RL), which finds an optimal policy using an empirical model, has long been recognized as one of the cornerstones of RL. It is especially suitable for multi-agent RL (MARL), as it naturally decouples the learning and the planning phases, and avoids the non-stationarity problem that arises when all agents improve their policies simultaneously from samples. Though model-based MARL is intuitive and widely used, its sample complexity has not been fully investigated. In this paper, our goal is to address this fundamental question. We study arguably the most basic MARL setting: two-player discounted zero-sum Markov games, given only access to a generative model. We show that model-based MARL achieves a sample complexity of Õ(|S||A||B|(1-γ)^-3ϵ^-2) for finding the Nash equilibrium (NE) value up to ϵ error, and ϵ-NE policies with a smooth planning oracle, where γ is the discount factor, S denotes the state space, and A, B denote the action spaces of the two agents. By establishing a matching lower bound, we further show that this sample bound is minimax-optimal (up to logarithmic factors) when the algorithm is reward-agnostic, i.e., it queries state-transition samples without knowledge of the reward. This is in contrast to the usual reward-aware setting, with an Ω̃(|S|(|A|+|B|)(1-γ)^-3ϵ^-2) lower bound, where this model-based approach is near-optimal with only a gap in the dependence on |A| and |B|. Our results not only demonstrate the sample efficiency of this basic model-based approach in MARL, but also elaborate on the fundamental tradeoff between its power (easily handling the more challenging reward-agnostic case) and its limitation (being less adaptive and suboptimal in |A| and |B|), a tradeoff that arises particularly in the multi-agent context.
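For intuition, here is a minimal sketch of the plug-in, model-based recipe described above, assuming access to a generative model sample_next_state(s, a, b) and a known reward table r; value iteration with a zero-sum matrix-game backup (Shapley's method) stands in for the planning oracle. All names, signatures, and parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog


def estimate_model(sample_next_state, nS, nA, nB, n_samples):
    """Build the empirical transition model P_hat(s' | s, a, b) by querying the
    generative model n_samples times for every (state, action, action) triple."""
    P_hat = np.zeros((nS, nA, nB, nS))
    for s in range(nS):
        for a in range(nA):
            for b in range(nB):
                for _ in range(n_samples):
                    P_hat[s, a, b, sample_next_state(s, a, b)] += 1.0
    return P_hat / n_samples


def matrix_game_value(Q):
    """Value of the zero-sum matrix game with payoff matrix Q (row player
    maximizes), via the standard linear-programming formulation."""
    nA, nB = Q.shape
    # Decision variables: (v, x_1, ..., x_nA), with x a distribution over rows.
    c = np.zeros(nA + 1)
    c[0] = -1.0                                    # maximize v  <=>  minimize -v
    A_ub = np.hstack([np.ones((nB, 1)), -Q.T])     # v - (x^T Q)_b <= 0 for all b
    b_ub = np.zeros(nB)
    A_eq = np.hstack([np.zeros((1, 1)), np.ones((1, nA))])   # sum_a x_a = 1
    b_eq = np.ones(1)
    bounds = [(None, None)] + [(0.0, None)] * nA
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0]


def plan_on_empirical_model(P_hat, r, gamma, n_iters=500):
    """Planning on the empirical game: value iteration where each Bellman backup
    solves the stage game Q(s) = r(s) + gamma * P_hat(s) V (Shapley iteration)."""
    nS, nA, nB, _ = P_hat.shape
    V = np.zeros(nS)
    for _ in range(n_iters):
        Q = r + gamma * P_hat @ V                  # shape (nS, nA, nB)
        V = np.array([matrix_game_value(Q[s]) for s in range(nS)])
    return V
```

In this sketch, the learning phase (estimate_model) and the planning phase (plan_on_empirical_model) are fully decoupled, which is the property the abstract highlights; the Õ(|S||A||B|(1-γ)^-3ϵ^-2) bound quantifies how many generative-model queries per (s, a, b) triple suffice for the NE value of the empirical game to be within ϵ of the true one.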


Related research

05/28/2021 · Sample-Efficient Reinforcement Learning for Linearly-Parameterized MDPs with a Generative Model
The curse of dimensionality is a widely known issue in reinforcement lea...

10/04/2020 · A Sharp Analysis of Model-based Reinforcement Learning with Self-Play
Model-based algorithms—algorithms that decouple learning of the model an...

08/17/2023 · Model-Free Algorithm with Improved Sample Efficiency for Zero-Sum Markov Games
The problem of two-player zero-sum Markov games has recently attracted i...

09/08/2023 · Learning Zero-Sum Linear Quadratic Games with Improved Sample Complexity
Zero-sum Linear Quadratic (LQ) games are fundamental in optimal control ...

07/01/2019 · On mechanisms for transfer using landmark value functions in multi-task lifelong reinforcement learning
Transfer learning across different reinforcement learning (RL) tasks is ...

11/01/2019 · Explicit Explore-Exploit Algorithms in Continuous State Spaces
We present a new model-based algorithm for reinforcement learning (RL) w...

06/07/2021 · The Power of Exploiter: Provable Multi-Agent RL in Large State Spaces
Modern reinforcement learning (RL) commonly engages practical problems w...
