On the Effectiveness of Equivariant Regularization for Robust Online Continual Learning

05/05/2023
by Lorenzo Bonicelli et al.

Humans can learn incrementally, whereas neural networks catastrophically forget previously acquired information. Continual Learning (CL) approaches seek to bridge this gap by facilitating the transfer of knowledge both to previous tasks (backward transfer) and to future ones (forward transfer) during training. Recent research has shown that self-supervision can produce versatile models that generalize well to diverse downstream tasks. However, contrastive self-supervised learning (CSSL), a popular self-supervision technique, has limited effectiveness in online CL (OCL): OCL permits only a single pass over the input data stream, and CSSL's low sample efficiency hinders its use in this setting. In this work, we propose Continual Learning via Equivariant Regularization (CLER), an OCL approach that leverages equivariant pretext tasks for self-supervision, avoiding CSSL's limitations. Our method represents the first attempt at combining equivariant knowledge with CL and can be easily integrated with existing OCL methods. Extensive ablations shed light on how equivariant pretext tasks affect the network's information flow and, in turn, its CL dynamics.
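The abstract describes CLER only at a high level. As a concrete illustration, below is a minimal sketch (not the authors' implementation) of the general recipe it implies: an equivariant pretext task, here rotation prediction, added as an auxiliary loss on top of a replay-based online CL step. The names `backbone`, `cls_head`, `rot_head`, the `buffer.sample`/`buffer.add` API, and the weight `alpha` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def rotate_batch(x):
    # Apply a random multiple of 90 degrees to each CHW image; the
    # rotation index doubles as the pretext-task label.
    ks = torch.randint(0, 4, (x.size(0),), device=x.device)
    x_rot = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                         for img, k in zip(x, ks)])
    return x_rot, ks

def train_step(backbone, cls_head, rot_head, buffer, x, y, opt, alpha=0.5):
    # One online step: supervised loss on the incoming stream batch,
    # a rehearsal loss on buffered samples, and the equivariant
    # (rotation-prediction) regularization term.
    opt.zero_grad()
    loss = F.cross_entropy(cls_head(backbone(x)), y)

    x_buf, y_buf = buffer.sample(x.size(0))          # assumed buffer API
    loss = loss + F.cross_entropy(cls_head(backbone(x_buf)), y_buf)

    x_rot, rot_y = rotate_batch(x)
    loss = loss + alpha * F.cross_entropy(rot_head(backbone(x_rot)), rot_y)

    loss.backward()
    opt.step()
    buffer.add(x, y)   # reservoir-style update; each stream sample seen once
    return float(loss)
```

In this sketch the rotation head would be discarded at test time, so the pretext loss acts purely as a regularizer on the shared backbone; this is what makes such a term easy to bolt onto any rehearsal-based OCL method, consistent with the paper's claim of easy integration.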


Related research

08/14/2022 · A Theory for Knowledge Transfer in Continual Learning
Continual learning of a stream of tasks is an active area in deep neural networks...

08/01/2019 · Continual Learning via Online Leverage Score Sampling
In order to mimic the human ability of continual acquisition and transfer...

07/05/2021 · Continual Contrastive Self-supervised Learning for Image Classification
For artificial learning systems, continual learning over time from a stream...

09/28/2020 · Sense and Learn: Self-Supervision for Omnipresent Sensors
Learning general-purpose representations from multisensor data produced...

03/16/2022 · ConTinTin: Continual Learning from Task Instructions
The mainstream machine learning paradigms for NLP often work with two un...

03/13/2021 · Online Learning of Objects through Curiosity-Driven Active Learning
Children learn continually by asking questions about the concepts they a...

11/18/2018 · Self-Organizing Maps for Storage and Transfer of Knowledge in Reinforcement Learning
The idea of reusing or transferring information from previously learned...
