A Multistep Lyapunov Approach for Finite-Time Analysis of Biased Stochastic Approximation

09/10/2019
by Gang Wang, et al.

Motivated by the widespread use of temporal-difference (TD) and Q-learning algorithms in reinforcement learning, this paper studies a class of biased stochastic approximation (SA) procedures under a mild "ergodic-like" assumption on the underlying stochastic noise sequence. Building on a carefully designed multistep Lyapunov function that looks ahead to several future updates to accommodate the stochastic perturbations (and thereby control the gradient bias), we prove a general convergence result for the iterates and use it to derive non-asymptotic bounds on the mean-square error in the case of constant stepsizes. This looking-ahead viewpoint makes finite-time analysis of biased SA algorithms possible under a large family of stochastic perturbations. For direct comparison with existing contributions, we also instantiate these bounds for TD- and Q-learning with linear function approximation under the practical Markov chain observation model. The resulting finite-time error bounds for both TD- and Q-learning are the first of their kind, in the sense that they hold i) for the unmodified algorithms (i.e., without any changes to the parameter updates), even with nonlinear function approximators, and for Markov chains ii) under general mixing conditions and iii) starting from any initial distribution, whereas existing results require at least one of these conditions to be violated.
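To make the setting concrete, the following is a minimal illustrative sketch (not the paper's exact procedure or its analysis) of TD(0) with linear function approximation on a hypothetical two-state Markov reward process, run with a constant stepsize along a single Markov trajectory. The chain, rewards, features, and stepsize are all assumed values chosen for demonstration; with a constant stepsize the iterate settles into a neighborhood of the true value function, which is the behavior the mean-square-error bounds quantify.

```python
import numpy as np

# Hypothetical two-state Markov reward process (illustration only).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])        # transition probabilities
r = np.array([1.0, -1.0])        # expected reward received in each state
gamma = 0.9                       # discount factor
Phi = np.eye(2)                   # tabular (identity) features for clarity

# True value function V = (I - gamma * P)^{-1} r, used only to check progress.
V_true = np.linalg.solve(np.eye(2) - gamma * P, r)

rng = np.random.default_rng(0)
theta = np.zeros(2)
alpha = 0.05                      # constant stepsize
s = 0
for _ in range(20000):
    s_next = rng.choice(2, p=P[s])            # Markovian (correlated) sample
    td_err = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta = theta + alpha * td_err * Phi[s]   # TD(0) update
    s = s_next

# With a constant stepsize, theta hovers in a neighborhood of V_true rather
# than converging exactly -- the regime the finite-time bounds describe.
err = np.linalg.norm(theta - V_true)
```

Because consecutive samples come from one Markov trajectory rather than i.i.d. draws, the update direction is biased at every step; this is precisely why the Markov chain observation model requires the kind of analysis the abstract describes.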

