Advances in Experience Replay

05/15/2018
by Tracy Wan, et al.

This project combines recent advances in experience replay, namely Combined Experience Replay (CER), Prioritized Experience Replay (PER), and Hindsight Experience Replay (HER). We present results for combinations of these techniques with the DDPG and DQN algorithms. CER always adds the most recent transition to each sampled batch. PER selects which experiences to replay according to how beneficial they are expected to be for learning. HER learns from failure by substituting the desired goal with the goal actually achieved and recomputing the reward. The effectiveness of these combinations is tested in a variety of OpenAI Gym environments.
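To make the sampling rules concrete, here is a minimal sketch of a replay buffer with CER sampling (the most recent transition is always included in the batch) and a HER-style relabeling helper. The class name, the transition dictionary layout, and the sparse 0/1 reward used in `her_relabel` are illustrative assumptions, not the paper's implementation.

```python
import random


class CombinedReplayBuffer:
    """Sketch of Combined Experience Replay (CER): every sampled
    batch is guaranteed to contain the most recent transition,
    with the remainder drawn uniformly at random."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []

    def add(self, transition):
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)  # drop the oldest transition when full
        self.buffer.append(transition)

    def sample(self, batch_size):
        # CER: always place the newest transition in the batch,
        # then fill the rest with uniform random samples.
        newest = self.buffer[-1]
        rest = random.sample(self.buffer, min(batch_size - 1, len(self.buffer)))
        return [newest] + rest


def her_relabel(transition, achieved_goal):
    """HER-style relabeling sketch: replace the desired goal with the
    goal actually achieved and recompute a sparse reward (assumed
    0/1 form based on goal attainment)."""
    relabeled = dict(transition)
    relabeled["goal"] = achieved_goal
    relabeled["reward"] = 1.0 if relabeled["next_state"] == achieved_goal else 0.0
    return relabeled


# Usage: fill the buffer and draw a CER batch.
buf = CombinedReplayBuffer(capacity=100)
for t in range(10):
    buf.add({"state": t, "next_state": t + 1, "goal": 99, "reward": 0.0})

batch = buf.sample(4)
assert batch[0]["state"] == 9  # the newest transition is always first
```

A PER variant would replace the uniform `random.sample` call with sampling proportional to each transition's TD-error priority; that bookkeeping (priority updates, importance-sampling weights) is omitted here for brevity.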


