Related research:

- Multi-Agent Decentralized Belief Propagation on Graphs
  We consider the problem of interactive partially observable Markov decis...

- Value Propagation for Decentralized Networked Deep Multi-agent Reinforcement Learning
  We consider the networked multi-agent reinforcement learning (MARL) prob...

- Finite-Sample Analyses for Fully Decentralized Multi-Agent Reinforcement Learning
  Despite the increasing interest in multi-agent reinforcement learning (M...

- BayGo: Joint Bayesian Learning and Information-Aware Graph Optimization
  This article deals with the problem of distributed machine learning, in ...

- Finite-Time Analysis of Decentralized Stochastic Approximation with Applications in Multi-Agent and Multi-Task Learning
  Stochastic approximation, a data-driven approach for finding the fixed p...

- On the Convergence of Consensus Algorithms with Markovian Noise and Gradient Bias
  This paper presents a finite time convergence analysis for a decentraliz...

- Walkman: A Communication-Efficient Random-Walk Algorithm for Decentralized Optimization
  This paper addresses consensus optimization problems in a multi-agent ne...

Fully Decentralized Multi-Agent Reinforcement Learning with Networked Agents
We consider the problem of fully decentralized multi-agent reinforcement learning (MARL), where the agents are located at the nodes of a time-varying communication network. Specifically, we assume that the reward functions of the agents may correspond to different tasks and are known only to the corresponding agent. Moreover, each agent makes individual decisions based on both the information observed locally and the messages received from its neighbors over the network. Within this setting, the collective goal of the agents is to maximize the globally averaged return over the network by exchanging information with their neighbors. To this end, we propose two decentralized actor-critic algorithms with function approximation, which are applicable to large-scale MARL problems in which both the number of states and the number of agents are large. Under the decentralized structure, the actor step is performed individually by each agent, with no need to infer the policies of others. For the critic step, we propose a consensus update via communication over the network. Our algorithms are fully incremental and can be implemented in an online fashion. Convergence analyses of the algorithms are provided when the value functions are approximated within the class of linear functions. Extensive simulation results with both linear and nonlinear function approximation are presented to validate the proposed algorithms. Our work appears to be the first study of fully decentralized MARL algorithms for networked agents with function approximation and provable convergence guarantees.
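To make the critic-side consensus step concrete, below is a minimal sketch of the general pattern the abstract describes: each agent performs a local TD update on its own linear critic using only its private reward, then averages parameters with its neighbors. Everything specific here is an illustrative assumption, not the paper's algorithm: the ring topology, the mixing matrix C, the feature map, the stand-in rewards, and the omission of the actor step.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4          # number of agents (hypothetical)
d = 8          # feature dimension (hypothetical)
beta = 0.05    # critic step size
gamma = 0.95   # discount factor

# Doubly stochastic mixing matrix over a ring network (illustrative choice);
# C[i, j] weights the parameters agent i receives from neighbor j.
C = np.zeros((N, N))
for i in range(N):
    C[i, i] = 0.5
    C[i, (i - 1) % N] = 0.25
    C[i, (i + 1) % N] = 0.25

w = rng.normal(size=(N, d))  # each agent's local linear critic parameters

def features(s):
    """Illustrative feature map phi(s) for a scalar state s."""
    return np.cos(np.arange(d) * s)

for t in range(1000):
    s = rng.uniform(0, 1)        # current state (observed by all agents)
    s_next = rng.uniform(0, 1)   # next state
    phi, phi_next = features(s), features(s_next)

    # Local critic step: each agent forms a TD error from its OWN reward
    # and takes a temporary gradient step; no policy inference is needed.
    w_half = np.empty_like(w)
    for i in range(N):
        r_i = np.sin(s + i)      # stand-in for agent i's private reward
        delta = r_i + gamma * w[i] @ phi_next - w[i] @ phi
        w_half[i] = w[i] + beta * delta * phi

    # Consensus step: each agent averages the intermediate parameters of
    # its neighbors, driving all critics toward a shared estimate of the
    # globally averaged return.
    w = C @ w_half

print("max disagreement across agents:", np.abs(w - w.mean(axis=0)).max())
```

Because C is doubly stochastic, the consensus step preserves the network-wide average of the parameters while shrinking the disagreement between agents, which is the usual mechanism by which local critics trained on different rewards come to estimate a globally averaged value.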