
Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents
We consider the problem of fully decentralized multi-agent reinforcement learning (MARL), where the agents are located at the nodes of a time-varying communication network. Specifically, we assume that the reward functions of the agents might correspond to different tasks, and are only known to the corresponding agent. Moreover, each agent makes individual decisions based on both the information observed locally and the messages received from its neighbors over the network. Within this setting, the collective goal of the agents is to maximize the globally averaged return over the network through exchanging information with their neighbors. To this end, we propose two decentralized actor-critic algorithms with function approximation, which are applicable to large-scale MARL problems where both the number of states and the number of agents are massively large. Under the decentralized structure, the actor step is performed individually by each agent with no need to infer the policies of others. For the critic step, we propose a consensus update via communication over the network. Our algorithms are fully incremental and can be implemented in an online fashion. Convergence analyses of the algorithms are provided when the value functions are approximated within the class of linear functions. Extensive simulation results with both linear and nonlinear function approximations are presented to validate the proposed algorithms. Our work appears to be the first study of fully decentralized MARL algorithms for networked agents with function approximation, with provable convergence guarantees.
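The consensus critic step described in the abstract can be sketched in a few lines: each agent runs a local TD(0) update of its linear critic using only its own privately observed reward, then mixes its parameters with its neighbors' via a doubly stochastic weight matrix. The network topology, features, chain dynamics, and reward tables below are illustrative assumptions for the sketch, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N, D, S = 4, 3, 5          # agents, feature dimension, states (illustrative sizes)
alpha, gamma = 0.1, 0.9    # critic step size, discount factor

phi = rng.standard_normal((S, D))       # fixed linear features per state
base_r = rng.uniform(0.0, 3.0, (N, S))  # heterogeneous rewards, known only to agent i

# Ring communication network with doubly stochastic mixing weights W
# (row- and column-stochastic, so averaging preserves the network mean).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i + 1) % N] = 0.25
    W[i, (i - 1) % N] = 0.25

w = rng.standard_normal((N, D))         # each agent's critic parameters

s = rng.integers(S)
for _ in range(300):
    s_next = rng.integers(S)            # placeholder uniform chain dynamics
    w_half = np.empty_like(w)
    for i in range(N):
        # Local TD(0) update using only agent i's own reward signal.
        td = base_r[i, s] + gamma * phi[s_next] @ w[i] - phi[s] @ w[i]
        w_half[i] = w[i] + alpha * td * phi[s]
    w = W @ w_half                      # consensus: mix neighbors' critic weights
    s = s_next

disagreement = np.max(np.abs(w - w.mean(axis=0)))
print(disagreement)
```

Because W is doubly stochastic, the consensus step leaves the network-average parameters unchanged while contracting the disagreement between agents, so the critics drift toward a common value estimate for the globally averaged reward; the actor step (omitted here) would use each agent's own critic locally, with no need to infer other agents' policies.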