Learning Representations on the Unit Sphere: Application to Online Continual Learning

06/06/2023
by   Nicolas Michel, et al.

We use the maximum a posteriori estimation principle for learning representations distributed on the unit sphere. We derive loss functions for the von Mises-Fisher distribution and the angular Gaussian distribution, both designed for modeling symmetric directional data. A noteworthy feature of our approach is that the learned representations are pushed toward fixed directions, allowing for a learning strategy that is resilient to data drift. This makes it suitable for online continual learning: training neural networks on a continuous data stream where multiple classification tasks arrive sequentially, data from past tasks are no longer accessible, and data from the current task can be seen only once. To address this challenging scenario, we propose a memory-based representation learning technique equipped with our new loss functions. Our approach does not require negative data or knowledge of task boundaries, performs well with small batch sizes, and is computationally efficient. We demonstrate through extensive experiments that the proposed method outperforms the current state-of-the-art on both standard evaluation scenarios and realistic scenarios with blurry task boundaries. For reproducibility, we use the same training pipeline for every compared method and share the code at https://t.ly/SQTj.
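To make the vMF-based objective concrete: the von Mises-Fisher density on the unit sphere is f(x; mu, kappa) = C_d(kappa) exp(kappa * mu^T x), so when the concentration kappa is shared across classes, the normalizer C_d(kappa) cancels in the class posterior and the MAP objective reduces to a cross-entropy over cosine similarities (scaled by kappa) between normalized features and fixed class directions. The snippet below is a minimal PyTorch sketch under that assumption; the function name vmf_loss, the value of kappa, and the randomly drawn directions are illustrative stand-ins, not the paper's actual implementation (see the linked code for that).

```python
import torch
import torch.nn.functional as F

def vmf_loss(z, targets, directions, kappa=10.0):
    """Cross-entropy over von Mises-Fisher log-likelihoods.

    z          -- (batch, dim) raw features from the encoder
    targets    -- (batch,) integer class labels
    directions -- (classes, dim) fixed unit-norm class directions
    kappa      -- shared concentration; with kappa shared across
                  classes, the vMF normalizer cancels in the softmax
    """
    z = F.normalize(z, dim=1)            # project features onto the unit sphere
    logits = kappa * z @ directions.t()  # kappa * mu_c^T z for every class c
    return F.cross_entropy(logits, targets)

# Hypothetical usage: fixed unit-norm directions drawn once and never updated,
# with a placeholder batch standing in for encoder outputs.
num_classes, dim = 10, 128
directions = F.normalize(torch.randn(num_classes, dim), dim=1)
z = torch.randn(32, dim)
targets = torch.randint(0, num_classes, (32,))
loss = vmf_loss(z, targets, directions)
```

Because the class directions stay fixed, the loss pulls each feature toward its class direction rather than toward statistics of the current batch, which is what makes the objective robust to drift in the data stream.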


Related research

03/29/2022 · Online Continual Learning on a Contaminated Data Stream with Blurry Task Boundaries
  Learning under a continuously changing data distribution with incorrect ...

11/30/2022 · Continual Learning with Distributed Optimization: Does CoCoA Forget?
  We focus on the continual learning problem where the tasks arrive sequen...

03/20/2019 · Online continual learning with no task boundaries
  Continual learning is the ability of an agent to learn online with a non...

09/01/2023 · New metrics for analyzing continual learners
  Deep neural networks have shown remarkable performance when trained on i...

05/25/2023 · Batch Model Consolidation: A Multi-Task Model Consolidation Framework
  In Continual Learning (CL), a model is required to learn a stream of tas...

08/18/2023 · Online Class Incremental Learning on Stochastic Blurry Task Boundary via Mask and Visual Prompt Tuning
  Continual learning aims to learn a model from a continuous stream of dat...

12/16/2021 · Effective prevention of semantic drift as angular distance in memory-less continual deep neural networks
  Lifelong machine learning or continual learning models attempt to learn ...
