Action Candidate Based Clipped Double Q-learning for Discrete and Continuous Action Tasks

05/03/2021
by Haobo Jiang, et al.

Double Q-learning is a popular reinforcement learning algorithm for Markov decision process (MDP) problems. Clipped Double Q-learning, an effective variant of Double Q-learning, employs the clipped double estimator to approximate the maximum expected action value. Due to the underestimation bias of the clipped double estimator, the performance of Clipped Double Q-learning can degrade in some stochastic environments. In this paper, to reduce this underestimation bias, we propose an action-candidate-based clipped double estimator for Double Q-learning. Specifically, we first select a set of elite action candidates with high action values from one set of estimators. Then, among these candidates, we choose the action with the highest value under the other set of estimators. Finally, we clip the first estimator's value of the chosen action by the maximum value in the second set of estimators, and this clipped value is used to approximate the maximum expected action value. Theoretically, the underestimation bias in our clipped Double Q-learning decays monotonically as the number of action candidates decreases, so the number of candidates controls the trade-off between the overestimation and underestimation biases. In addition, we extend our clipped Double Q-learning to continuous action tasks by approximating the elite continuous action candidates. We empirically verify that our algorithm estimates the maximum expected action value more accurately on several toy environments and yields good performance on several benchmark problems.
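The three steps described in the abstract (elite candidate selection, cross-evaluation, clipping) can be sketched for the discrete-action tabular case as follows. This is a minimal illustration assuming two independent tabular estimators `q_a` and `q_b` over the same action set; the function name and signature are hypothetical, not from the paper.

```python
import numpy as np

def ac_clipped_double_estimate(q_a, q_b, k):
    """Action-candidate-based clipped double estimate of max_a E[Q(a)].

    q_a, q_b: 1-D arrays of action values from two independent estimators
    k: number of elite action candidates; smaller k reduces the
       underestimation bias (trading it off against overestimation)
    """
    # 1) elite candidates: the k highest-valued actions under estimator A
    candidates = np.argsort(q_a)[-k:]
    # 2) among the candidates, pick the action estimator B values most
    a_star = candidates[np.argmax(q_b[candidates])]
    # 3) clip estimator A's value of the chosen action by the maximum
    #    value in estimator B; this is the final estimate
    return min(q_a[a_star], np.max(q_b))
```

With `k = 1` this reduces to evaluating estimator A's greedy action, while larger `k` moves the action choice toward estimator B, recovering behavior closer to the standard clipped double estimator.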
