Bootstrapping a DQN Replay Memory with Synthetic Experiences

An important component of many Deep Reinforcement Learning algorithms is the Experience Replay, which serves as a storage mechanism, or memory, of the experiences the agent has collected. These experiences are used for training and help the agent find an optimal trajectory through the problem space in a stable manner. The classic Experience Replay, however, uses only the experiences the agent actually made, even though the stored samples hold additional knowledge about the problem that can be extracted. We present an algorithm that creates synthetic experiences in a nondeterministic discrete environment to assist the learner. The Interpolated Experience Replay is evaluated on the FrozenLake environment, and we show that it can help the agent learn faster and reach better performance than the classic version.
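The core idea of interpolating over stored transitions to synthesize additional training samples can be sketched as follows. The buffer below is a minimal, hypothetical illustration in Python: it assumes the synthetic experience for a (state, action) pair simply carries the running average of all rewards observed for that pair, which smooths the noise of a nondeterministic reward signal. The class and parameter names (InterpolatedReplayBuffer, synthetic_ratio) and the exact interpolation rule are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import defaultdict, deque


class InterpolatedReplayBuffer:
    """Replay memory that augments real transitions with synthetic ones.

    Hypothetical sketch: the synthetic transition for a (state, action)
    pair carries the running average of all rewards seen for that pair.
    """

    def __init__(self, capacity=10_000):
        self.real = deque(maxlen=capacity)        # experiences the agent actually made
        self.synthetic = deque(maxlen=capacity)   # interpolated experiences
        self.reward_sum = defaultdict(float)      # (state, action) -> summed reward
        self.visits = defaultdict(int)            # (state, action) -> visit count

    def add(self, state, action, reward, next_state, done):
        # Store the real experience.
        self.real.append((state, action, reward, next_state, done))
        # Update per-(state, action) reward statistics.
        key = (state, action)
        self.reward_sum[key] += reward
        self.visits[key] += 1
        # Create a synthetic experience carrying the averaged (interpolated) reward.
        avg_reward = self.reward_sum[key] / self.visits[key]
        self.synthetic.append((state, action, avg_reward, next_state, done))

    def sample(self, batch_size, synthetic_ratio=0.5):
        """Draw a minibatch that mixes real and synthetic experiences."""
        n_syn = min(int(batch_size * synthetic_ratio), len(self.synthetic))
        n_real = min(batch_size - n_syn, len(self.real))
        batch = (random.sample(list(self.real), n_real)
                 + random.sample(list(self.synthetic), n_syn))
        random.shuffle(batch)
        return batch


# Example usage with a discrete environment such as FrozenLake (int states/actions):
# buffer = InterpolatedReplayBuffer()
# buffer.add(state=0, action=1, reward=0.0, next_state=4, done=False)
# minibatch = buffer.sample(batch_size=32)
```

Mixing real and averaged synthetic samples in each minibatch is one plausible way such interpolated experiences could counteract the reward noise of a nondeterministic environment and thereby speed up learning.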
