Analysis of Q-learning with Adaptation and Momentum Restart for Gradient Descent

07/15/2020
by Bowen Weng, et al.

Existing convergence analyses of Q-learning mostly focus on vanilla stochastic gradient descent (SGD) type updates. Although Adaptive Moment Estimation (Adam) is commonly used in practical Q-learning algorithms, no convergence guarantee has been provided for Q-learning with such updates. In this paper, we first characterize the convergence rate of Q-AMSGrad, the Q-learning algorithm with the AMSGrad update (a commonly adopted alternative to Adam for theoretical analysis). To further improve performance, we propose to incorporate a momentum restart scheme into Q-AMSGrad, resulting in the Q-AMSGradR algorithm, and we establish its convergence rate as well. Our experiments on a linear quadratic regulator problem show that the two proposed Q-learning algorithms outperform vanilla Q-learning with SGD updates, and both exhibit significantly better performance than DQN on a batch of Atari 2600 games.
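To make the two algorithms named above concrete, the sketch below shows an AMSGrad-style update with an optional momentum restart, driving a Q-learning weight update under linear function approximation. This is a minimal sketch, not the authors' implementation: the fixed restart period, the `features(s, a)` feature map, and all function names here are hypothetical, and the paper's actual restart criterion and analysis setting may differ.

```python
import numpy as np

class AMSGradR:
    """AMSGrad optimizer with an optional momentum-restart schedule.

    A sketch of the update rules the abstract refers to; the periodic
    restart below is an assumed heuristic, not necessarily the paper's
    exact scheme.
    """

    def __init__(self, dim, lr=1e-3, beta1=0.9, beta2=0.999,
                 eps=1e-8, restart_every=None):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.restart_every = restart_every      # None disables restarts (plain AMSGrad)
        self.m = np.zeros(dim)                  # first-moment (momentum) estimate
        self.v = np.zeros(dim)                  # second-moment estimate
        self.v_hat = np.zeros(dim)              # running max of v: the AMSGrad change vs. Adam
        self.t = 0

    def step(self, theta, grad):
        self.t += 1
        if self.restart_every and self.t % self.restart_every == 0:
            self.m[:] = 0.0                     # momentum restart
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        return theta - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```

With this optimizer in hand, a single Q-learning update with linear function approximation amounts to feeding the TD semi-gradient through `step`; the result behaves like Q-AMSGrad when `restart_every` is `None`, and like Q-AMSGradR otherwise:

```python
def q_learning_step(theta, opt, s, a, r, s_next, actions, features, gamma=0.99):
    # Q(s, a) is approximated linearly as features(s, a) @ theta.
    q_sa = features(s, a) @ theta
    q_next = max(features(s_next, b) @ theta for b in actions)
    td_error = r + gamma * q_next - q_sa
    grad = -td_error * features(s, a)           # semi-gradient of the squared TD error
    return opt.step(theta, grad)
```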
