A Deeper Look at Experience Replay

12/04/2017
by Shangtong Zhang, et al.

Experience replay plays an important role in the success of deep reinforcement learning (RL) by helping to stabilize the training of neural networks. It has become a standard component of deep RL algorithms. In this paper, however, we show that a poorly chosen replay buffer size can hurt performance even in very simple tasks; the buffer size is therefore a hyper-parameter that needs careful tuning. Moreover, our study of experience replay leads to the formulation of the Combined DQN algorithm, which can significantly outperform vanilla DQN in some tasks.
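
The abstract does not spell out the mechanism, so the following is a minimal Python sketch of a fixed-capacity replay buffer, together with one plausible reading of the "combined" idea, namely mixing the most recent transition into every uniformly sampled mini-batch. The class names and the exact sampling rule are illustrative assumptions, not the paper's reference implementation.

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity uniform experience replay; the capacity is the hyper-parameter discussed above."""

    def __init__(self, capacity):
        # Oldest transitions are evicted first once the buffer is full.
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        # transition is a tuple (state, action, reward, next_state, done)
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling, as in standard DQN.
        return random.sample(self.buffer, batch_size)


class CombinedReplayBuffer(ReplayBuffer):
    """Hypothetical 'combined' variant: every mini-batch also contains the newest transition."""

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size - 1)
        batch.append(self.buffer[-1])  # always include the most recent transition
        return batch
```

In a training loop one would call add(...) after each environment step and sample(batch_size) before each gradient update; under the assumption above, the only difference between the two classes is the sampling rule.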

Related research

Temporal Difference Learning with Experience Replay (06/16/2023)
Temporal-difference (TD) learning is widely regarded as one of the most ...

Look Back When Surprised: Stabilizing Reverse Experience Replay for Neural Approximation (06/07/2022)
Experience replay methods, which are an essential part of reinforcement ...

Large Batch Experience Replay (10/04/2021)
Several algorithms have been proposed to sample non-uniformly the replay...

Reverb: A Framework For Experience Replay (02/09/2021)
A central component of training in Reinforcement Learning (RL) is Experi...

Revisiting Fundamentals of Experience Replay (07/13/2020)
Experience replay is central to off-policy algorithms in deep reinforcem...

Dynamic Weights in Multi-Objective Deep Reinforcement Learning (09/20/2018)
Many real-world decision problems are characterized by multiple objectiv...

The Role of Diverse Replay for Generalisation in Reinforcement Learning (06/09/2023)
In reinforcement learning (RL), key components of many algorithms are th...
