Reducing Estimation Bias via Weighted Delayed Deep Deterministic Policy Gradient
The overestimation induced by function approximation is a well-known issue in value-based reinforcement learning algorithms such as deep Q-networks and DDPG, and it can lead to suboptimal policies. To address this issue, TD3 takes the minimum value of a pair of critics, which in turn introduces underestimation bias. By unifying these two opposing biases, we propose a novel Weighted Delayed Deep Deterministic Policy Gradient algorithm, which reduces the estimation error and further improves performance by weighting the pair of critics. We compare the learning of the value function under DDPG, TD3, and our algorithm, verifying that our algorithm can indeed eliminate the estimation error of the value function. We evaluate our algorithm on the OpenAI Gym continuous control tasks, where it outperforms the state-of-the-art algorithms in every environment tested.
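As a rough illustration of the weighted-critic idea, the sketch below computes a Bellman target that mixes the minimum of the two critic estimates (TD3's underestimating choice) with their average (an overestimating choice). The function name, the use of the average as the second term, and the weight `beta` are illustrative assumptions, not the exact scheme from the full paper.

```python
import torch

def weighted_td_target(reward, not_done, q1_next, q2_next, gamma=0.99, beta=0.75):
    """Bellman target built from a weighted pair of critics.

    TD3 would use the plain minimum min(q1_next, q2_next), which
    underestimates; the plain average tends to overestimate. This
    sketch blends the two with a hypothetical coefficient `beta`.
    """
    q_min = torch.min(q1_next, q2_next)          # pessimistic estimate (TD3-style)
    q_avg = 0.5 * (q1_next + q2_next)            # optimistic estimate
    q_weighted = beta * q_min + (1.0 - beta) * q_avg
    return reward + not_done * gamma * q_weighted
```

In practice such a target would replace the min-only target in the twin-critic update, with `beta` trading off the underestimation of the minimum against the overestimation of the average.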