Finite-Time Performance of Distributed Temporal Difference Learning with Linear Function Approximation

07/25/2019
by Thinh T. Doan, et al.

We study the policy evaluation problem in multi-agent reinforcement learning, where a group of agents operates in a common environment. The goal of the agents is to cooperatively evaluate the global discounted cumulative reward, which is composed of the local rewards observed by the individual agents. Over a series of time steps, the agents act, receive rewards, update their local estimates of the value function, and then communicate with their neighbors. The local update at each agent can be interpreted as a distributed variant of the popular temporal difference learning method TD(λ). Our main contribution is a finite-time analysis of the performance of this distributed TD(λ) for both constant and time-varying step sizes. The key idea in our analysis is to exploit the geometric mixing time τ of the underlying Markov chain: although the "noise" in our algorithm is Markovian, its dependence is nearly washed out after every τ steps. In particular, we provide an explicit formula for the upper bound on the convergence rates of the proposed method as a function of the network topology, the discount factor, the constant λ, and the mixing time τ. Our results theoretically explain some numerical observations about TD(λ), namely, that λ = 1 gives the best approximation of the value function, while λ = 0 leads to better performance when there is a large variance in the algorithm. Our results complement the existing literature, where such an explicit formula for the rates of distributed TD(λ) has not been available.
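
To make the algorithmic setup concrete, below is a minimal sketch of a per-agent TD(λ) update with linear function approximation followed by a consensus (neighbor-averaging) step. The function names, the uniform averaging matrix W, and the toy dimensions are illustrative assumptions, not the paper's exact algorithm or analysis.

```python
import numpy as np


def td_lambda_step(theta, trace, phi_s, phi_s_next, reward, gamma, lam, alpha):
    """One TD(lambda) update with linear function approximation.

    The value estimate is V(s) = phi(s) @ theta; `trace` is the eligibility
    trace accumulated over past feature vectors.
    """
    delta = reward + gamma * phi_s_next @ theta - phi_s @ theta  # TD error
    trace = gamma * lam * trace + phi_s                          # decay and accumulate trace
    theta = theta + alpha * delta * trace                        # TD(lambda) parameter update
    return theta, trace


def distributed_td_lambda_step(thetas, traces, phi_s, phi_s_next,
                               local_rewards, W, gamma, lam, alpha):
    """One round of a distributed variant: each agent updates with its own
    local reward, then mixes its parameters with neighbors through a doubly
    stochastic weight matrix W (the communication/consensus step)."""
    updated = []
    for i in range(len(thetas)):
        theta_i, traces[i] = td_lambda_step(
            thetas[i], traces[i], phi_s, phi_s_next,
            local_rewards[i], gamma, lam, alpha)
        updated.append(theta_i)
    mixed = W @ np.stack(updated)  # theta_i <- sum_j W[i, j] * theta_j
    return [mixed[i] for i in range(len(thetas))], traces


# Toy usage: 3 agents, 4-dimensional features, uniform averaging matrix.
rng = np.random.default_rng(0)
n, d = 3, 4
W = np.full((n, n), 1.0 / n)  # doubly stochastic consensus weights
thetas = [np.zeros(d) for _ in range(n)]
traces = [np.zeros(d) for _ in range(n)]
phi_s, phi_s_next = rng.standard_normal(d), rng.standard_normal(d)
rewards = rng.standard_normal(n)
thetas, traces = distributed_td_lambda_step(
    thetas, traces, phi_s, phi_s_next, rewards, W,
    gamma=0.95, lam=0.5, alpha=0.1)
```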

Related research

06/24/2020 - Local Stochastic Approximation: A Unified View of Federated Learning and Distributed Multi-Task Reinforcement Learning Algorithms
Motivated by broad applications in reinforcement learning and federated ...

05/27/2019 - Finite-Time Analysis of Q-Learning with Linear Function Approximation
In this paper, we consider the model-free reinforcement learning problem...

11/03/2019 - Finite-Sample Analysis of Decentralized Temporal-Difference Learning with Linear Function Approximation
Motivated by the emerging use of multi-agent reinforcement learning (MAR...

06/18/2020 - Distributed Value Function Approximation for Collaborative Multi-Agent Reinforcement Learning
In this paper we propose novel distributed gradient-based temporal diffe...

11/07/2022 - Policy evaluation from a single path: Multi-step methods, mixing and mis-specification
We study non-parametric estimation of the value function of an infinite-...

01/18/2021 - Learning Successor States and Goal-Dependent Values: A Mathematical Viewpoint
In reinforcement learning, temporal difference-based algorithms can be s...

09/25/2021 - Distributed Online Optimization with Byzantine Adversarial Agents
We study the problem of non-constrained, discrete-time, online distribut...
