
Finite-Sample Analysis for Two Time-scale Non-linear TDC with General Smooth Function Approximation

04/07/2021
by Yue Wang, et al.

Temporal-difference learning with gradient correction (TDC) is a two time-scale algorithm for policy evaluation in reinforcement learning. The algorithm was originally proposed with linear function approximation and was later extended to general smooth function approximation. Asymptotic convergence in the on-policy setting with general smooth function approximation was established in [bhatnagar2009convergent]; the finite-sample analysis, however, remained open due to challenges posed by the non-linear two-time-scale update structure, the non-convex objective function, and the time-varying projection onto a tangent plane. In this paper, we develop novel techniques to explicitly characterize the finite-sample error bound in the general off-policy setting with i.i.d. or Markovian samples, and show that it converges as fast as 𝒪(1/√(T)) (up to a factor of 𝒪(log T)). Our approach applies to a wide range of value-based reinforcement learning algorithms with general smooth function approximation.
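The two time-scale update structure the abstract refers to can be illustrated with a minimal sketch of the original linear-TDC updates: a slow iterate θ for the value-function weights and a fast auxiliary iterate w that tracks a least-squares correction term. The non-linear variant analysed in this paper additionally involves a Hessian correction term and the time-varying tangent-plane projection mentioned above, which are omitted here. The 3-state Markov reward process, the features, and the step-size schedules below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-state Markov reward process (illustrative assumption, not from the paper).
gamma = 0.9
P = np.array([[0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6],
              [0.5, 0.3, 0.2]])   # transition matrix
r = np.array([0.0, 1.0, -0.5])   # per-state rewards
# Two-dimensional features for three states: the value function is
# approximated, not represented exactly.
phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.5, 0.5]])

theta = np.zeros(2)  # slow time-scale iterate: value-function weights
w = np.zeros(2)      # fast time-scale iterate: auxiliary correction weights

s = 0
for t in range(1, 100001):
    s_next = rng.choice(3, p=P[s])
    delta = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta  # TD error
    alpha = min(0.5, 2.0 / t**0.6)  # slow step size
    beta = min(0.5, 2.0 / t**0.4)   # fast step size (decays more slowly)
    # Gradient-correction update for theta; the second term corrects the
    # bias of the one-step TD update using w.
    theta = theta + alpha * (delta * phi[s] - gamma * phi[s_next] * (phi[s] @ w))
    # w solves a least-squares subproblem on the faster time scale.
    w = w + beta * (delta - phi[s] @ w) * phi[s]
    s = s_next

print(theta)
```

Because the sampling here is on-policy, the slow iterate should approach the TD fixed point θ* solving Φᵀ D (I − γP) Φ θ* = Φᵀ D r, where D is the diagonal matrix of stationary state probabilities; the separation β ≫ α asymptotically is what lets w be treated as "equilibrated" in the analysis of θ.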


05/20/2020

Finite-sample Analysis of Greedy-GQ with Linear Function Approximation under Markovian Noise

Greedy-GQ is an off-policy two timescale algorithm for optimal control i...
01/11/2023

An Analysis of Quantile Temporal-Difference Learning

We analyse quantile temporal-difference learning (QTD), a distributional...
10/11/2019

Zap Q-Learning With Nonlinear Function Approximation

The Zap stochastic approximation (SA) algorithm was introduced recently ...
11/10/2022

When is Realizability Sufficient for Off-Policy Reinforcement Learning?

Model-free algorithms for reinforcement learning typically require a con...
05/25/2018

Finite Sample Analysis of LSTD with Random Projections and Eligibility Traces

Policy evaluation with linear function approximation is an important pro...
09/26/2019

Two Time-scale Off-Policy TD Learning: Non-asymptotic Analysis over Markovian Samples

Gradient-based temporal difference (GTD) algorithms are widely used in o...
02/02/2019

Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

Stochastic approximation (SA) is a key method used in statistical learni...