Decentralized Multi-Agent Reinforcement Learning for Continuous-Space Stochastic Games

03/16/2023
by Awni Altabaa, et al.

Stochastic games are a popular framework for studying multi-agent reinforcement learning (MARL). Recent advances in MARL have focused primarily on games with finitely many states. In this work, we study multi-agent learning in stochastic games with general state spaces and an information structure in which agents do not observe each other's actions. In this context, we propose a decentralized MARL algorithm and prove the near-optimality of its policy updates. Furthermore, we study the global policy-updating dynamics for a general class of best-reply-based algorithms and derive a closed-form characterization of convergence probabilities over the joint policy space.
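The information structure described above can be illustrated with a minimal sketch of fully decentralized learning: each agent runs its own Q-learning update using only the state and its own action and reward, never observing the other agent's action. This is not the paper's algorithm; it is a toy illustration under stated assumptions, where the continuous state space [0, 1) is quantized into finitely many bins and the agents play a simple coordination game.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(s, n_bins):
    # Map a continuous state in [0, 1) to a finite bin index.
    return min(int(s * n_bins), n_bins - 1)

class IndependentQAgent:
    """One agent's local learner: it observes the state and its own
    action and reward, but never the other agents' actions (the
    information structure assumed in the abstract)."""

    def __init__(self, n_bins, n_actions, alpha=0.1, gamma=0.9, eps=0.2):
        self.Q = np.zeros((n_bins, n_actions))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_actions = n_actions

    def act(self, s_idx):
        # epsilon-greedy action selection on the local Q table
        if rng.random() < self.eps:
            return int(rng.integers(self.n_actions))
        return int(np.argmax(self.Q[s_idx]))

    def update(self, s_idx, a, r, s_next_idx):
        # Standard Q-learning update; other agents are implicitly
        # folded into the environment's transition and reward.
        target = r + self.gamma * self.Q[s_next_idx].max()
        self.Q[s_idx, a] += self.alpha * (target - self.Q[s_idx, a])

# Toy two-agent stochastic game on the continuous state space [0, 1):
# both agents receive a shared reward when their actions match (a
# coordination game), and the state drifts by small random amounts.
n_bins, n_actions = 8, 2
agents = [IndependentQAgent(n_bins, n_actions) for _ in range(2)]
s = 0.5
for _ in range(5000):
    s_idx = quantize(s, n_bins)
    acts = [ag.act(s_idx) for ag in agents]
    r = 1.0 if acts[0] == acts[1] else 0.0   # shared coordination reward
    s_next = (s + 0.1 * rng.standard_normal()) % 1.0
    s_next_idx = quantize(s_next, n_bins)
    for ag, a in zip(agents, acts):
        ag.update(s_idx, a, r, s_next_idx)   # each agent learns locally
    s = s_next
```

Because the game has two symmetric coordination equilibria, independent learners can settle on either one per state bin; the closed-form convergence probabilities mentioned in the abstract characterize exactly this kind of uncertainty over which joint policy the dynamics reach.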

Related research

- 02/02/2023, Best Possible Q-Learning: Fully decentralized learning, where the global information, i.e., the ac...
- 04/04/2023, Off-Policy Action Anticipation in Multi-Agent Reinforcement Learning: Learning anticipation in Multi-Agent Reinforcement Learning (MARL) is a ...
- 08/13/2018, On Passivity, Reinforcement Learning and Higher-Order Learning in Multi-Agent Finite Games: In this paper, we propose a passivity-based methodology for analysis and...
- 09/28/2020, Agent Environment Cycle Games: Partially Observable Stochastic Games (POSGs) are the most general mode...
- 04/07/2013, A General Framework for Interacting Bayes-Optimally with Self-Interested Agents using Arbitrary Parametric Model and Model Prior: Recent advances in Bayesian reinforcement learning (BRL) have shown that...
- 08/07/2023, Asynchronous Decentralized Q-Learning: Two Timescale Analysis By Persistence: Non-stationarity is a fundamental challenge in multi-agent reinforcement...
- 06/08/2022, Learning in games from a stochastic approximation viewpoint: We develop a unified stochastic approximation framework for analyzing th...
