Replay Buffer With Local Forgetting for Adaptive Deep Model-Based Reinforcement Learning

03/15/2023
by Ali Rahimi-Kalahroudi, et al.

One of the key behavioral characteristics used in neuroscience to determine whether the subject of study, be it a rodent or a human, exhibits model-based learning is effective adaptation to local changes in the environment. In reinforcement learning, however, recent work has shown that modern deep model-based reinforcement-learning (MBRL) methods adapt poorly to such changes. An explanation for this mismatch is that MBRL methods are typically designed with sample-efficiency on a single task in mind, whereas the requirements for effective adaptation are substantially higher, both in terms of the learned world model and the planning routine. One particularly challenging requirement is that the learned world model has to be sufficiently accurate throughout relevant parts of the state-space. This is challenging for deep-learning-based world models due to catastrophic forgetting. While a replay buffer can mitigate the effects of catastrophic forgetting, the traditional first-in-first-out replay buffer precludes effective adaptation because it maintains stale data. In this work, we show that a conceptually simple variation of this traditional replay buffer is able to overcome this limitation. By removing from the buffer only those samples that lie in the local neighbourhood of newly observed samples, deep world models can be built that maintain their accuracy across the state-space while also adapting effectively to changes in the reward function. We demonstrate this by applying our replay-buffer variation to a deep version of the classical Dyna method, as well as to recent methods such as PlaNet and DreamerV2, showing that deep model-based methods can also adapt effectively to local changes in the environment.
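The core mechanism described above (evicting stored transitions near a newly observed state instead of the oldest ones) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name, the Euclidean-distance neighbourhood test, and the `radius` parameter are assumptions for the sketch, and the FIFO fallback when the buffer is still full is a simplification.

```python
import numpy as np


class LocalForgettingReplayBuffer:
    """Sketch of a replay buffer with local forgetting.

    Instead of pure first-in-first-out eviction, adding a new
    transition first removes stored transitions whose state lies
    within `radius` of the new state, so stale data from a changed
    region of the environment does not linger in the buffer.
    (Hypothetical interface; the paper's neighbourhood criterion
    and hyperparameters may differ.)
    """

    def __init__(self, capacity, radius):
        self.capacity = capacity
        self.radius = radius
        self.buffer = []  # list of (state, action, reward, next_state)

    def add(self, state, action, reward, next_state):
        state = np.asarray(state, dtype=float)
        # Local forgetting: drop transitions whose stored state is
        # within `radius` of the newly observed state.
        self.buffer = [
            t for t in self.buffer
            if np.linalg.norm(np.asarray(t[0], dtype=float) - state) > self.radius
        ]
        # Fall back to FIFO only if the buffer is still at capacity.
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size, rng=np.random):
        # Uniform sampling with replacement, as in standard replay.
        idx = rng.choice(len(self.buffer), size=batch_size)
        return [self.buffer[i] for i in idx]
```

When the environment changes locally (e.g. a reward source moves), the first new transitions observed in the affected region immediately displace the outdated ones there, while transitions elsewhere in the state-space are retained, which is what lets the world model stay accurate globally.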


