Decorrelated Double Q-learning

06/12/2020
by Gang Chen, et al.

Q-learning with value function approximation may suffer from poor performance due to overestimation bias and imprecise value estimates. Specifically, overestimation bias arises from applying the maximum operator to noisy estimates, and it is further compounded when the estimate of a subsequent state is bootstrapped. Inspired by recent advances in deep reinforcement learning and Double Q-learning, we introduce decorrelated double Q-learning (D2Q). In particular, we add a decorrelation regularization term that reduces the correlation between the value function approximators, which leads to less biased estimation and lower variance. Experimental results on a suite of MuJoCo continuous control tasks demonstrate that decorrelated double Q-learning effectively improves performance.
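The abstract describes adding a decorrelation penalty between two value function approximators on top of a double-Q critic update. Below is a minimal PyTorch sketch of that idea, not the authors' reference implementation: the specific form of the regularizer (an empirical batch correlation here), the network architecture, and the weight `beta` are illustrative assumptions.

```python
# Sketch: two critics trained with a shared TD target plus a penalty on the
# empirical correlation of their outputs, so the approximators stay decorrelated.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Critic(nn.Module):
    """Simple state-action value network Q(s, a)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def critic_loss(q1, q2, state, action, target_q, beta=0.1):
    """TD loss for both critics plus a decorrelation regularizer (assumed form)."""
    v1 = q1(state, action)
    v2 = q2(state, action)
    td_loss = F.mse_loss(v1, target_q) + F.mse_loss(v2, target_q)
    # Empirical correlation of the two estimators over the batch;
    # penalizing its square pushes the two value estimates toward decorrelation.
    c1 = v1 - v1.mean()
    c2 = v2 - v2.mean()
    corr = (c1 * c2).mean() / (c1.std() * c2.std() + 1e-8)
    return td_loss + beta * corr.pow(2)
```

In an actor-critic loop for MuJoCo tasks, `target_q` would be the usual bootstrapped target computed from target networks; only the extra `beta * corr.pow(2)` term distinguishes this sketch from a standard clipped double-Q critic update.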

