Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

06/04/2020
by Gen Li, et al.

Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP) from a single trajectory of Markovian samples induced by a behavior policy. Focusing on a γ-discounted MDP with state space 𝒮 and action space 𝒜, we demonstrate that the ℓ_∞-based sample complexity of classical asynchronous Q-learning (namely, the number of samples needed to yield an entrywise ε-accurate estimate of the Q-function) is at most on the order of 1/(μ_𝗆𝗂𝗇(1-γ)^5 ε^2) + t_𝗆𝗂𝗑/(μ_𝗆𝗂𝗇(1-γ)), up to some logarithmic factor, provided that a suitable constant learning rate is adopted. Here, t_𝗆𝗂𝗑 and μ_𝗆𝗂𝗇 denote, respectively, the mixing time and the minimum state-action occupancy probability of the sample trajectory. The first term of this bound matches the sample complexity in the setting with independent samples drawn from the stationary distribution of the trajectory. The second term reflects the cost of waiting for the empirical distribution of the Markovian trajectory to reach its steady state, a cost incurred only at the very beginning and amortized as the algorithm runs. Encouragingly, this bound improves upon the state-of-the-art result by a factor of at least |𝒮||𝒜|. Further, the scaling in the discount complexity can be improved by means of variance reduction.
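To make the setting concrete, below is a minimal sketch of asynchronous Q-learning run along a single Markovian trajectory with a constant learning rate, in the spirit of the algorithm analyzed above. The environment interface (env.reset, env.step, num_states, num_actions) and the behavior_policy callable are hypothetical placeholders used only for illustration, not part of the paper.

```python
import numpy as np

def asynchronous_q_learning(env, behavior_policy, num_steps, gamma=0.99, eta=0.1):
    """Classical asynchronous Q-learning along one Markovian sample trajectory.

    Hypothetical interface: env.reset() -> state, env.step(a) -> (next_state, reward),
    with finite env.num_states / env.num_actions; behavior_policy(state) -> action.
    eta is the constant learning rate assumed in the analysis sketched above.
    """
    Q = np.zeros((env.num_states, env.num_actions))
    s = env.reset()
    for _ in range(num_steps):
        a = behavior_policy(s)
        s_next, r = env.step(a)
        # "Asynchronous": only the entry (s, a) visited by the trajectory is updated;
        # every other entry of Q is left unchanged at this step.
        Q[s, a] = (1 - eta) * Q[s, a] + eta * (r + gamma * np.max(Q[s_next]))
        s = s_next
    return Q
```

The update touches one state-action pair per sample, which is why the minimum occupancy probability μ_𝗆𝗂𝗇 and the mixing time t_𝗆𝗂𝗑 of the behavior trajectory both enter the sample complexity bound.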

Related research

02/12/2021 · Is Q-Learning Minimax Optimal? A Tight Sample Complexity Analysis
Q-learning, which seeks to learn the optimal Q-function of a Markov deci...

06/27/2012 · On the Sample Complexity of Reinforcement Learning with a Generative Model
We consider the problem of learning the optimal action-value function in...

03/14/2022 · The Efficacy of Pessimism in Asynchronous Q-Learning
This paper is concerned with the asynchronous form of Q-learning, which ...

06/11/2019 · Variance-reduced Q-learning is minimax optimal
We introduce and analyze a form of variance-reduced Q-learning. For γ-di...

05/18/2023 · The Blessing of Heterogeneity in Federated Q-learning: Linear Speedup and Beyond
When the data used for reinforcement learning (RL) are collected by mult...

03/08/2023 · Policy Mirror Descent Inherently Explores Action Space
Designing computationally efficient exploration strategies for on-policy...

02/18/2023 · Stochastic Approximation Approaches to Group Distributionally Robust Optimization
This paper investigates group distributionally robust optimization (GDRO...
