Reverse Experience Replay

10/19/2019
by Egor Rotinov, et al.

This paper describes an improvement to deep Q-learning called Reverse Experience Replay (RER), which addresses the problem of sparse rewards in reward-maximizing tasks by sampling transitions from the replay memory successively in reverse order. On tasks with sufficient training experience and enough Experience Replay memory capacity, a Deep Q-Network with Reverse Experience Replay shows results competitive with both Double DQN using a standard Experience Replay and vanilla DQN. Moreover, RER achieves significantly better results on tasks where experience and replay memory capacity are limited.
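To make the sampling scheme concrete, below is a minimal sketch of a replay buffer that returns consecutive transitions in reverse chronological order. This is an illustrative reconstruction, not the paper's reference implementation; the names (ReverseReplayBuffer, sample_reverse) and the random-window sampling details are assumptions.

```python
import random
from collections import deque, namedtuple

Transition = namedtuple("Transition", "state action reward next_state done")

class ReverseReplayBuffer:
    """Ring-buffer replay memory that yields batches in reverse order.

    Illustrative sketch of reverse-order sampling; not the paper's code.
    """

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions drop out first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append(Transition(state, action, reward, next_state, done))

    def sample_reverse(self, batch_size):
        # Pick a random window of consecutive transitions, then return it
        # newest-first, so Q-learning updates sweep backward in time and
        # reward information propagates toward earlier states sooner.
        assert len(self.buffer) >= batch_size, "not enough experience stored"
        end = random.randint(batch_size, len(self.buffer))
        window = [self.buffer[i] for i in range(end - batch_size, end)]
        return list(reversed(window))
```

Under this reading of the method, each transition in the reversed batch would be applied as a sequential Q-learning update rather than one vectorized minibatch step, so the bootstrapped target for an earlier state can already reflect the freshly updated value of its successor.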

Related research

05/15/2018 · Advances in Experience Replay
This project combines recent advances in experience replay techniques, n...

02/23/2021 · A Robotic Model of Hippocampal Reverse Replay for Reinforcement Learning
Hippocampal reverse replay is thought to contribute to learning, and par...

05/18/2019 · Combining Experience Replay with Exploration by Random Network Distillation
Our work is a simple extension of the paper "Exploration by Random Netwo...

04/19/2023 · Quantum deep Q learning with distributed prioritized experience replay
This paper introduces the QDQN-DPER framework to enhance the efficiency ...

01/03/2018 · ViZDoom: DRQN with Prioritized Experience Replay, Double-Q Learning, & Snapshot Ensembling
ViZDoom is a robust, first-person shooter reinforcement learning environ...

08/20/2018 · Learning to Dialogue via Complex Hindsight Experience Replay
Reinforcement learning methods have been used for learning dialogue poli...

04/18/2019 · Particle Filter on Episode
Differently from animals, robots can record its experience correctly for...
