Synthetic Experience Replay

by Cong Lu et al.

A key theme of the past decade has been that when large neural networks are combined with large datasets, they can produce remarkable results. In deep reinforcement learning (RL), this paradigm is commonly made possible through experience replay, whereby a dataset of past experiences is used to train a policy or value function. However, unlike in supervised or self-supervised learning, an RL agent has to collect its own data, which is often limited. Thus, it is challenging to reap the benefits of deep learning, and even small neural networks can overfit at the start of training. In this work, we leverage the tremendous recent progress in generative modeling and propose Synthetic Experience Replay (SynthER), a diffusion-based approach to arbitrarily upsample an agent's collected experience. We show that SynthER is an effective method for training RL agents in both offline and online settings. In offline settings, we observe drastic improvements both when upsampling small offline datasets and when training larger networks with additional synthetic data. Furthermore, SynthER enables online agents to train with a much higher update-to-data ratio than before, leading to a large increase in sample efficiency without any algorithmic changes. We believe that synthetic training data could open the door to realizing the full potential of deep learning for replay-based RL algorithms from limited data.
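The core pipeline the abstract describes can be illustrated with a minimal sketch: train a generative model on the agent's collected transitions, sample arbitrarily many synthetic transitions, and mix them into the replay buffer. The paper uses a diffusion model; in the sketch below, a multivariate Gaussian fit to the real transitions stands in for it, and the dimensions and mixing ratio are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real experience: 100 transitions, each flattened to a vector
# [state (4), action (1), reward (1), next_state (4)] -> dim 10.
real = rng.normal(size=(100, 10))

# "Train" the stand-in generative model on the real transitions.
# (SynthER trains a diffusion model here instead.)
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False) + 1e-6 * np.eye(real.shape[1])

def sample_synthetic(n):
    """Sample n synthetic transitions from the fitted model."""
    return rng.multivariate_normal(mu, cov, size=n)

# Upsample: generate 10x the original data and mix it with real experience.
synthetic = sample_synthetic(1000)
replay = np.concatenate([real, synthetic], axis=0)

# The RL agent then draws minibatches from the mixed buffer as usual,
# which is what allows higher update-to-data ratios without new real data.
batch = replay[rng.choice(len(replay), size=32, replace=False)]
print(replay.shape, batch.shape)  # (1100, 10) (32, 10)
```

Because the generative model can be queried indefinitely, the size of the synthetic portion is a free parameter, which is what "arbitrarily upsample" refers to.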


