Continual Learning with Self-Organizing Maps

04/19/2019
by Pouya Bashivan, et al.

Despite remarkable successes achieved by modern neural networks in a wide range of applications, these networks perform best in domain-specific stationary environments where they are trained only once on large-scale controlled data repositories. When exposed to non-stationary learning environments, current neural networks tend to forget what they had previously learned, a phenomenon known as catastrophic forgetting. Most previous approaches to this problem rely on memory replay buffers that store samples from previously learned tasks and use them to regularize learning on new ones. This approach has the important disadvantage of not scaling well to real-life problems, where the memory requirements become enormous. We propose a memoryless method that combines standard supervised neural networks with self-organizing maps to solve the continual learning problem. The role of the self-organizing map is to adaptively cluster the inputs into appropriate task contexts - without explicit labels - and allocate network resources accordingly. Thus, it selectively routes the inputs in accord with previous experience, ensuring that past learning is maintained and does not interfere with current learning. Our method is intuitive, memoryless, and performs on par with current state-of-the-art approaches on standard benchmarks.
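The abstract describes the mechanism only at a high level. The snippet below is a minimal, hypothetical sketch of that general idea: a small self-organizing map clusters inputs into task contexts without labels, and the winning map unit selects which set of classifier parameters handles the input. All names (e.g. `som_gated_predict`), the 1-D map topology, the per-unit classifier heads, and the hyperparameters are assumptions made for illustration, not the authors' implementation.

```python
# Toy illustration (not the paper's architecture): a SOM infers a task
# context from the input and gates a per-context classifier head.
import numpy as np

class SOM:
    def __init__(self, n_units, dim, lr=0.5, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(n_units, dim))  # one prototype per map unit
        self.lr, self.sigma = lr, sigma
        # 1-D map topology: unit positions used by the neighborhood function
        self.positions = np.arange(n_units, dtype=float)

    def bmu(self, x):
        # Best matching unit = prototype closest to the input
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def update(self, x):
        # Move the winner and its topological neighbors toward the input
        win = self.bmu(x)
        dist = np.abs(self.positions - self.positions[win])
        h = np.exp(-dist**2 / (2 * self.sigma**2))  # neighborhood kernel
        self.weights += self.lr * h[:, None] * (x - self.weights)
        return win

def som_gated_predict(x, som, heads):
    """Route the input to the classifier head chosen by the SOM's winner.

    `heads` is a list of (W, b) pairs, one per SOM unit; inputs clustered
    into the same unit reuse (and would only update) that head, so learning
    on a new cluster does not overwrite heads tied to older clusters.
    """
    context = som.bmu(x)
    W, b = heads[context]
    return context, x @ W + b

if __name__ == "__main__":
    dim, n_classes, n_units = 8, 3, 4
    som = SOM(n_units=n_units, dim=dim)
    rng = np.random.default_rng(1)
    heads = [(rng.normal(size=(dim, n_classes)), np.zeros(n_classes))
             for _ in range(n_units)]
    x = rng.normal(size=dim)
    som.update(x)  # unsupervised context clustering step
    ctx, logits = som_gated_predict(x, som, heads)
    print(f"context={ctx}, prediction={int(np.argmax(logits))}")
```

In this sketch the SOM plays the role the abstract assigns to it: it partitions the input space into contexts without supervision, and only the resources (here, the head) associated with the current context are used, which is one way the routing can keep new learning from interfering with old.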


