
Continual Learning: Fast and Slow

by   Quang Pham, et al.

According to the Complementary Learning Systems (CLS) theory <cit.> in neuroscience, humans achieve effective continual learning through two complementary systems: a fast learning system centered on the hippocampus for the rapid learning of specific, individual experiences, and a slow learning system located in the neocortex for the gradual acquisition of structured knowledge about the environment. Motivated by this theory, we propose DualNets (for Dual Networks), a general continual learning framework comprising a fast learning system for supervised learning of pattern-separated representations from specific tasks and a slow learning system for learning task-agnostic, general representations via Self-Supervised Learning (SSL). DualNets seamlessly incorporates both representation types into a holistic framework to facilitate better continual learning in deep neural networks. Through extensive experiments, we demonstrate the promising results of DualNets on a wide range of continual learning protocols, from the standard offline, task-aware setting to the challenging online, task-free scenario. Notably, on the CTrL <cit.> benchmark, which contains unrelated tasks with vastly different visual images, DualNets achieves competitive performance with existing state-of-the-art dynamic-architecture strategies <cit.>. Furthermore, we conduct comprehensive ablation studies to validate DualNets' efficacy, robustness, and scalability. Code is publicly available at <>.
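The division of labor described above can be illustrated with a minimal sketch: a slow learner trained on a self-supervised invariance objective over unlabeled views, and a fast learner trained with a supervised loss on top of the slow features. All names, dimensions, and objectives below are illustrative stand-ins, not the paper's implementation (the actual slow learner optimizes a Barlow Twins-style SSL loss on deep convolutional features).

```python
import numpy as np

rng = np.random.default_rng(0)

# Slow learner: task-agnostic encoder (here a single tanh layer for brevity).
# Fast learner: supervised linear head on top of the slow representation.
W_slow = rng.normal(scale=0.1, size=(32, 64))   # slow encoder weights
W_fast = rng.normal(scale=0.1, size=(64, 10))   # fast classifier head

def encode(x):
    """Slow, task-agnostic representation of a batch."""
    return np.tanh(x @ W_slow)

def ssl_invariance_loss(x, rng):
    """Toy SSL objective for the slow learner: representations of two
    augmented views of the same batch should agree (no labels used)."""
    v1 = x + 0.1 * rng.normal(size=x.shape)     # toy "augmentations"
    v2 = x + 0.1 * rng.normal(size=x.shape)
    z1, z2 = encode(v1), encode(v2)
    return np.mean((z1 - z2) ** 2)

def supervised_loss(x, y):
    """Fast learner's objective: cross-entropy on top of slow features."""
    logits = encode(x) @ W_fast
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

# One batch from the current task: the slow loss needs only the inputs,
# while the fast loss consumes the labels.
x = rng.normal(size=(8, 32))
y = rng.integers(0, 10, size=8)
loss_slow = ssl_invariance_loss(x, rng)
loss_fast = supervised_loss(x, y)
```

In training, the two losses would be minimized in interleaved phases, so the slow representation keeps improving from unlabeled data even when task labels are scarce.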

