- Experience Replay for Continual Learning: Continual learning is the problem of learning new tasks or knowledge whi...
- Continual Graph Learning: Graph Neural Networks (GNNs) have recently received significant research...
- Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay: Despite huge success, deep networks are unable to learn effectively in s...
- Hierarchical Deep Q-Network with Forgetting from Imperfect Demonstrations in Minecraft: We present hierarchical Deep Q-Network with Forgetting (HDQF) that took ...
- Generation and Consolidation of Recollections for Efficient Deep Lifelong Learning: Deep lifelong learning systems need to efficiently manage resources to s...
- Prioritizing Starting States for Reinforcement Learning: Online, off-policy reinforcement learning algorithms are able to use an ...
- Meta-Learning with Sparse Experience Replay for Lifelong Language Learning: Lifelong learning requires models that can continuously learn from seque...
Selective Experience Replay for Lifelong Learning
Deep reinforcement learning has emerged as a powerful tool for a variety of learning tasks; however, deep nets typically exhibit forgetting when learning multiple tasks in sequence. To mitigate forgetting, we propose an experience replay process that augments the standard FIFO buffer and selectively stores experiences in a long-term memory. We explore four strategies for selecting which experiences will be stored: favoring surprise, favoring reward, matching the global training distribution, and maximizing coverage of the state space. We show that distribution matching successfully prevents catastrophic forgetting and is consistently the best approach on all domains tested. While distribution matching has better and more consistent performance, we identify one case in which coverage maximization is beneficial: when tasks that receive less training are more important. Overall, our results show that selective experience replay, when suitable selection algorithms are employed, can prevent catastrophic forgetting.
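As a rough illustration of the idea of augmenting a FIFO buffer with a selectively populated long-term memory, the sketch below approximates the distribution-matching strategy with reservoir sampling, which keeps the long-term store as a uniform sample over every experience seen so far. This is a minimal sketch under stated assumptions, not the paper's implementation: the class name, capacities, transition format, and the FIFO/long-term mixing ratio are all illustrative choices.

```python
import random
from collections import deque


class SelectiveReplayBuffer:
    """Short-term FIFO buffer augmented with a selectively retained long-term memory.

    The long-term memory uses reservoir sampling so that retained experiences
    approximate the global training distribution (one of the four selection
    strategies discussed in the abstract). Names and defaults are illustrative.
    """

    def __init__(self, fifo_capacity=10_000, long_term_capacity=50_000, seed=0):
        self.fifo = deque(maxlen=fifo_capacity)   # standard FIFO replay buffer
        self.long_term = []                        # selectively stored experiences
        self.long_term_capacity = long_term_capacity
        self.num_seen = 0                          # total experiences observed so far
        self.rng = random.Random(seed)

    def add(self, transition):
        """Store a transition, e.g. (state, action, reward, next_state, done)."""
        self.fifo.append(transition)
        self.num_seen += 1
        if len(self.long_term) < self.long_term_capacity:
            self.long_term.append(transition)
        else:
            # Reservoir sampling: keep the new transition with probability
            # capacity / num_seen, so the retained set stays ~uniform over history.
            j = self.rng.randrange(self.num_seen)
            if j < self.long_term_capacity:
                self.long_term[j] = transition

    def sample(self, batch_size, long_term_fraction=0.5):
        """Draw a minibatch mixing recent (FIFO) and long-term experiences."""
        n_long = min(int(batch_size * long_term_fraction), len(self.long_term))
        n_recent = min(batch_size - n_long, len(self.fifo))
        batch = self.rng.sample(self.long_term, n_long) if n_long else []
        batch += self.rng.sample(list(self.fifo), n_recent) if n_recent else []
        return batch
```

The other selection strategies named in the abstract (favoring surprise, favoring reward, maximizing state-space coverage) would replace the reservoir-sampling rule in `add` with a score-based eviction criterion; the sampling side can stay the same.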