On the Estimation Bias in Double Q-Learning

09/29/2021
by Zhizhou Ren, et al.

Double Q-learning is a classical method for reducing overestimation bias, which is caused by taking the maximum of estimated values in the Bellman update. Its variants in the deep Q-learning paradigm have shown great promise in producing reliable value predictions and improving learning performance. However, as prior work has shown, double Q-learning is not fully unbiased: it suffers from underestimation bias. In this paper, we show that such underestimation bias may lead to multiple non-optimal fixed points under an approximate Bellman operator. To address the concern of converging to non-optimal stationary solutions, we propose a simple but effective approach as a partial fix for the underestimation bias in double Q-learning: it leverages approximate dynamic programming to bound the target value. We extensively evaluate the proposed method on Atari benchmark tasks and demonstrate significant improvements over baseline algorithms.
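For concreteness, below is a minimal tabular sketch of the two targets the abstract contrasts, plus an illustrative lower-bound clip in the spirit of the proposed fix. The function names, the tabular setting, and the clipping construction are assumptions for illustration only, not the paper's algorithm.

```python
import numpy as np

def q_learning_target(Q, r, s_next, gamma=0.99):
    """Standard Q-learning target: r + gamma * max_a Q(s', a).
    Maximizing over noisy estimates biases the target upward
    (overestimation)."""
    return r + gamma * np.max(Q[s_next])

def double_q_target(Q_a, Q_b, r, s_next, gamma=0.99):
    """Double Q-learning target: select the greedy action with one
    estimator, evaluate it with the other. This removes the
    max-induced overestimation but, as the abstract notes, tends to
    underestimate instead."""
    a_star = int(np.argmax(Q_a[s_next]))
    return r + gamma * Q_b[s_next, a_star]

def bounded_double_q_target(Q_a, Q_b, r, s_next, lower_bound, gamma=0.99):
    """Illustrative only: clip the double-Q target from below with a
    value lower bound (e.g., one obtained from an approximate
    dynamic-programming backup), in the spirit of 'bound the target
    value'. The paper's exact construction may differ."""
    return max(double_q_target(Q_a, Q_b, r, s_next, gamma), lower_bound)

# Tiny usage example on a random 3-state, 2-action table.
rng = np.random.default_rng(0)
Q_a = rng.normal(size=(3, 2))
Q_b = rng.normal(size=(3, 2))
print(q_learning_target(Q_a, r=1.0, s_next=2))
print(double_q_target(Q_a, Q_b, r=1.0, s_next=2))
```

In deep variants such as Double DQN, the two estimators correspond to the online and target networks rather than two independent tables.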

Related research

Decorrelated Double Q-learning (06/12/2020)
Q-learning with value function approximation may have the poor performan...

Maxmin Q-learning: Controlling the Estimation Bias of Q-learning (02/16/2020)
Q-learning suffers from overestimation bias, because it approximates the...

Efficient Continuous Control with Double Actors and Regularized Critics (06/06/2021)
How to obtain good value estimation is one of the key problems in Reinfo...

Ensemble Bootstrapping for Q-Learning (02/28/2021)
Q-learning (QL), a common reinforcement learning algorithm, suffers from...

Cross Learning in Deep Q-Networks (09/29/2020)
In this work, we propose a novel cross Q-learning algorithm, aim at alle...

Value Activation for Bias Alleviation: Generalized-activated Deep Double Deterministic Policy Gradients (12/21/2021)
It is vital to accurately estimate the value function in Deep Reinforcem...

Modified Double DQN: addressing stability (08/09/2021)
Inspired by double q learning algorithm, the double DQN algorithm was or...
