Finite-Sample Analysis of Decentralized Temporal-Difference Learning with Linear Function Approximation
Motivated by the emerging use of multi-agent reinforcement learning (MARL) in engineering applications such as networked robotics, swarming drones, and sensor networks, we investigate the policy evaluation problem in a fully decentralized setting, using temporal-difference (TD) learning with linear function approximation to handle large state spaces in practice. The goal of a group of agents is to collaboratively learn the value function of a given policy from locally private rewards observed in a shared environment, by exchanging local estimates with neighbors. Despite the simplicity and widespread use of such decentralized TD learning algorithms, our theoretical understanding of them remains limited. Existing results were obtained based on i.i.d. data samples, or by imposing an additional projection step to control the "gradient" bias incurred by the Markovian observations. In this paper, we provide a finite-sample analysis of fully decentralized TD(0) learning under both i.i.d. and Markovian samples, and prove that all local estimates converge linearly to a small neighborhood of the optimum. The resulting error bounds are the first of their kind, in the sense that they hold under the most practical assumptions, and are made possible by means of a novel multi-step Lyapunov analysis.
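
The abstract describes agents that alternate between averaging neighbors' estimates and taking local TD(0) steps on private rewards. Below is a minimal sketch of that scheme under common assumptions (a doubly stochastic mixing matrix `W` encoding the communication graph, a shared state process, and per-agent private rewards); the interface names (`env`, `features`, etc.) are illustrative and not from the paper.

```python
import numpy as np

def decentralized_td0(env, features, W, num_agents, feature_dim,
                      alpha=0.05, gamma=0.95, num_steps=1000):
    """Sketch of decentralized TD(0) with linear function approximation.

    Each agent i keeps a local weight vector theta[i]. At every step it
    (1) mixes its neighbors' estimates via the doubly stochastic matrix W,
    and (2) applies a local TD(0) correction using only its own private
    reward observed from the shared environment.
    """
    theta = np.zeros((num_agents, feature_dim))
    state = env.reset()                            # hypothetical environment interface
    for _ in range(num_steps):
        next_state, rewards = env.step()           # shared transition, private rewards[i]
        phi, phi_next = features(state), features(next_state)
        mixed = W @ theta                          # consensus step: average with neighbors
        for i in range(num_agents):
            td_error = rewards[i] + gamma * theta[i] @ phi_next - theta[i] @ phi
            theta[i] = mixed[i] + alpha * td_error * phi   # local TD(0) step
        state = next_state
    return theta                                   # local estimates of the value function weights
```

Under the paper's assumptions, all rows of the returned `theta` would converge linearly to a small neighborhood of the common optimum; this sketch only illustrates the update structure, not the analysis.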