Finite-Time Performance of Distributed Temporal Difference Learning with Linear Function Approximation

07/25/2019
by   Thinh T. Doan, et al.

We study the policy evaluation problem in multi-agent reinforcement learning, where a group of agents operates in a common environment. In this problem, the goal of the agents is to cooperatively evaluate the global discounted accumulative reward, which is composed of local rewards observed by the agents. Over a series of time steps, the agents act, get rewarded, update their local estimates of the value function, and then communicate with their neighbors. The local update at each agent can be interpreted as a distributed variant of the popular temporal difference learning method TD(λ). Our main contribution is a finite-time analysis of the performance of this distributed TD(λ) for both constant and time-varying step sizes. The key idea in our analysis is to utilize the geometric mixing time τ of the underlying Markov chain: although the "noise" in our algorithm is Markovian, its dependence is nearly damped out every τ steps. In particular, we provide an explicit formula for the upper bound on the rates of the proposed method as a function of the network topology, the discount factor, the constant λ, and the mixing time τ. Our results theoretically address some numerical observations about TD(λ), namely, that λ = 1 gives the best approximation of the function values while λ = 0 leads to better performance when there is a large variance in the algorithm. Our results complement the existing literature, where such an explicit formula for the rates of distributed TD(λ) is not available.
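To make the algorithm described in the abstract concrete, the sketch below shows a minimal distributed TD(λ) loop with linear function approximation: each agent observes the shared state, forms a TD(λ) update using its private local reward, and then averages its parameters with its neighbors through a doubly stochastic weight matrix. The feature map `phi`, the ring communication topology, the mixing matrix `W`, and the step size `alpha` are illustrative assumptions chosen for this example, not quantities taken from the paper.

```python
import numpy as np

# --- Illustrative problem setup (not from the paper) ---
n_agents, n_states, n_features = 4, 10, 5
gamma, lam, alpha = 0.9, 0.5, 0.05        # discount factor, trace parameter, step size

rng = np.random.default_rng(0)
phi = rng.normal(size=(n_states, n_features))        # fixed linear feature map phi(s)
P = rng.dirichlet(np.ones(n_states), size=n_states)  # state transitions under the fixed policy
rewards = rng.uniform(size=(n_agents, n_states))     # each agent's private local reward

# Doubly stochastic mixing matrix over a hypothetical ring network
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

theta = np.zeros((n_agents, n_features))  # local value-function parameters
z = np.zeros((n_agents, n_features))      # local eligibility traces

s = rng.integers(n_states)
for t in range(5000):
    s_next = rng.choice(n_states, p=P[s])
    # Eligibility-trace update (agents share the same state, so traces coincide)
    z = gamma * lam * z + phi[s]
    local = np.empty_like(theta)
    for i in range(n_agents):
        # TD error computed with agent i's private local reward
        delta = rewards[i, s] + gamma * theta[i] @ phi[s_next] - theta[i] @ phi[s]
        local[i] = theta[i] + alpha * delta * z[i]    # local TD(lambda) step
    # Consensus step: average parameters with neighbors
    theta = W @ local
    s = s_next

# After mixing, each row of theta approximates the value of the global average reward
print(theta)
```

This is only a sketch under the stated assumptions; the paper's analysis covers both constant and time-varying step sizes, whereas the example above uses a single constant `alpha` for brevity.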

