Exploiting Fast Decaying and Locality in Multi-Agent MDP with Tree Dependence Structure

09/15/2019
by   Guannan Qu, et al.

This paper considers a multi-agent Markov Decision Process (MDP) with n agents, where each agent i is associated with a state s_i and an action a_i taking values in a finite set. Although the global state and action spaces are exponentially large in n, we impose a local dependence structure and focus on local policies that depend only on local states. We propose a method that finds nearly optimal local policies in time polynomial in n when the dependence structure is a one-directional tree. The algorithm builds on approximate reward functions that are evaluated using a locally truncated Markov process. Furthermore, under certain conditions, we prove that the gap between the approximate reward function and the true reward function decays exponentially in the length of the truncated Markov process. The intuition is that, under suitable assumptions, the effect of agent interactions decays exponentially in the distance between agents, a phenomenon we term the "fast decaying property".
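The fast decaying property and the locally truncated evaluation can be illustrated on a toy example. The sketch below is a hedged illustration, not the paper's algorithm: it uses a linear directed chain of agents (a special case of a one-directional tree) with an assumed self-weight `a` and a weak upstream coupling `eps`, and measures how the error from simulating only a k-hop local neighborhood shrinks as k grows.

```python
import numpy as np

# Toy illustration of the "fast decaying property" on a directed chain of
# agents (a special case of a one-directional tree). Hedged sketch, not the
# paper's method: agent i's next state depends linearly on its own state and
# weakly on its parent s_{i-1}.
n, T = 12, 8        # number of agents, rollout horizon
a, eps = 0.5, 0.1   # self-weight and weak upstream coupling (assumed values)

def last_agent_state(s0, lo):
    """Roll out agents lo..n-1 only; the input from below agent lo is cut off
    (set to 0), mimicking evaluation on a locally truncated Markov process."""
    s = s0.copy()
    for _ in range(T):
        nxt = s.copy()
        for i in range(lo, n):
            upstream = s[i - 1] if i - 1 >= lo else 0.0  # truncation boundary
            nxt[i] = a * s[i] + eps * upstream
        s = nxt
    return s[n - 1]

rng = np.random.default_rng(0)
s0 = rng.uniform(1.0, 2.0, size=n)
exact = last_agent_state(s0, 0)  # lo=0 keeps the whole chain

# Truncation error of agent n-1's state when only its k nearest upstream
# neighbors are simulated; the error shrinks rapidly as k grows, because any
# omitted influence must traverse at least k+1 weak couplings.
errs = [abs(last_agent_state(s0, n - 1 - k) - exact) for k in range(1, n)]
for k, e in zip(range(1, n), errs):
    print(f"k={k:2d}  truncation error={e:.3e}")
```

In this linear setting every omitted influence path multiplies in one extra factor of `eps` per hop, so the truncation error decays geometrically in the neighborhood radius k, mirroring the exponential gap bound claimed in the abstract.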

research, 06/11/2020
Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward
It has long been recognized that multi-agent reinforcement learning (MAR...

research, 03/29/2021
Scalable Planning in Multi-Agent MDPs
Multi-agent Markov Decision Processes (MMDPs) arise in a variety of appl...

research, 01/25/2020
Learning Non-Markovian Reward Models in MDPs
There are situations in which an agent should receive rewards only after...

research, 09/30/2021
Decentralized Graph-Based Multi-Agent Reinforcement Learning Using Reward Machines
In multi-agent reinforcement learning (MARL), it is challenging for a co...

research, 12/07/2016
Effect of Reward Function Choices in MDPs with Value-at-Risk
This paper studies Value-at-Risk (VaR) problems in short- and long-horiz...

research, 06/01/2021
Gradient Play in Multi-Agent Markov Stochastic Games: Stationary Points and Convergence
We study the performance of the gradient play algorithm for multi-agent ...

research, 08/01/2018
Robbins-Monro conditions for persistent exploration learning strategies
We formulate simple assumptions, implying the Robbins-Monro conditions f...
