Model-Free Generative Replay for Lifelong Reinforcement Learning: Application to Starcraft-2

08/09/2022
by Zachary Daniels, et al.

One approach to meeting the challenges of deep lifelong reinforcement learning (LRL) is careful management of the agent's learning experiences, in order to learn (without forgetting) and to build internal meta-models (of the tasks, environments, agents, and world). Generative replay (GR) is a biologically inspired replay mechanism that augments learning experiences with self-labelled examples drawn from an internal generative model that is updated over time. We present a version of GR for LRL that satisfies two desiderata: (a) introspective density modelling of the latent representations of policies learned using deep RL, and (b) model-free end-to-end learning. In this paper, we study three deep learning architectures for model-free GR, starting from a naïve GR and adding ingredients to achieve (a) and (b). We evaluate the proposed algorithms on three different scenarios comprising tasks from the Starcraft-2 and Minigrid domains. We report several key findings showing the impact of the design choices on quantitative metrics that include transfer learning, generalization to unseen tasks, fast adaptation after task change, performance relative to a task expert, and catastrophic forgetting. We observe that our GR prevents drift in the mapping from the latent vector space of a deep RL agent to its actions, and we show improvements in established lifelong learning metrics. We find that a small random replay buffer significantly increases the stability of training. Overall, we find that "hidden replay" (a well-known architecture for class-incremental classification) is the most promising approach, pushing the state of the art in GR for LRL, and we observe that the architecture of the sleep model may be more important for improving performance than the type of replay used.
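To make the mechanism concrete, below is a minimal sketch of hidden-replay-style generative replay in PyTorch: a small VAE models the agent's latent feature space (rather than raw observations), features sampled from it are self-labelled by the current action head, and real and replayed examples are mixed during a consolidation ("sleep") step. All names and dimensions here (FeatureVAE, sleep_step, FEAT_DIM, etc.) are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch of hidden-replay-style generative replay for LRL.
# Assumes a policy split into a feature extractor (not shown) and an
# action head; the generative model lives in the latent feature space.
import torch
import torch.nn as nn

LATENT_DIM, FEAT_DIM, N_ACTIONS = 32, 128, 8

class FeatureVAE(nn.Module):
    """Generative model over the agent's latent features, not raw pixels."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU())
        self.mu = nn.Linear(64, LATENT_DIM)
        self.logvar = nn.Linear(64, LATENT_DIM)
        self.dec = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, FEAT_DIM))

    def forward(self, feats):
        h = self.enc(feats)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

    def sample(self, n):
        # Draw "remembered" feature vectors from the prior.
        return self.dec(torch.randn(n, LATENT_DIM))

vae = FeatureVAE()
action_head = nn.Linear(FEAT_DIM, N_ACTIONS)  # features -> action logits
opt = torch.optim.Adam(list(vae.parameters()) +
                       list(action_head.parameters()), lr=1e-3)

def sleep_step(wake_feats, wake_actions, n_replay=64):
    """One consolidation step mixing real wake data with self-labelled
    features replayed from the generative model."""
    with torch.no_grad():
        replay_feats = vae.sample(n_replay)
        # Self-labelling: the current policy head labels its own replay.
        replay_actions = action_head(replay_feats).argmax(dim=-1)
    feats = torch.cat([wake_feats, replay_feats])
    actions = torch.cat([wake_actions, replay_actions])
    recon, mu, logvar = vae(feats)
    vae_loss = ((recon - feats) ** 2).mean() \
        - 0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    policy_loss = nn.functional.cross_entropy(action_head(feats), actions)
    loss = vae_loss + policy_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage with random stand-in data:
loss = sleep_step(torch.randn(32, FEAT_DIM),
                  torch.randint(0, N_ACTIONS, (32,)))
```

Modelling features instead of raw observations is what keeps the sketch model-free and introspective in the abstract's sense; per the finding above, a small random buffer of real examples could additionally be concatenated into the sleep batch to stabilise training.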


Related research

Selective Experience Replay for Lifelong Learning (02/28/2018)
Deep reinforcement learning has emerged as a powerful tool for a variety...

Map-based Experience Replay: A Memory-Efficient Solution to Catastrophic Forgetting in Reinforcement Learning (05/03/2023)
Deep Reinforcement Learning agents often suffer from catastrophic forget...

Offline Experience Replay for Continual Offline Reinforcement Learning (05/23/2023)
The capability of continuously learning new skills via a sequence of pre...

Understanding the effect of varying amounts of replay per step (02/20/2023)
Model-based reinforcement learning uses models to plan, where the predic...

Replay Buffer With Local Forgetting for Adaptive Deep Model-Based Reinforcement Learning (03/15/2023)
One of the key behavioral characteristics used in neuroscience to determ...

Learning latent representations across multiple data domains using Lifelong VAEGAN (07/20/2020)
The problem of catastrophic forgetting occurs in deep learning models tr...

Learning Expected Emphatic Traces for Deep RL (07/12/2021)
Off-policy sampling and experience replay are key for improving sample e...
