Integral Continual Learning Along the Tangent Vector Field of Tasks

11/23/2022
by Tian Yu Liu, et al.

We propose a continual learning method which incorporates information from specialized datasets incrementally, by integrating it along the vector field of "generalist" models. The tangent plane to the specialist model acts as a generalist guide and avoids the kind of over-fitting that leads to catastrophic forgetting, while exploiting the convexity of the optimization landscape in the tangent plane. It maintains a small fixed-size memory buffer, as low as 0.4% of the source datasets, which is updated by simple resampling. Our method achieves state-of-the-art performance across various buffer sizes for different datasets. Specifically, in the class-incremental setting we outperform existing methods by an average of 26.24% and [...] on Seq-CIFAR-10 and Seq-TinyImageNet respectively. Our method can easily be combined with existing replay-based continual learning methods. When memory buffer constraints are relaxed to allow storage of other metadata such as logits, we attain state-of-the-art accuracy on Seq-CIFAR-10, with an error reduction of 36%.
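The abstract's two main ingredients, training only in the tangent plane of a frozen pretrained model (a first-order Taylor expansion around its weights) and maintaining a small fixed-size replay buffer by simple resampling, can be illustrated with a short sketch. The code below is not the authors' implementation: the MLP backbone, layer sizes, learning rate, buffer capacity, and the use of reservoir sampling as the "simple resampling" rule are illustrative assumptions.

```python
import random

import jax
import jax.numpy as jnp


def mlp(params, x):
    """Tiny MLP standing in for a pretrained backbone f(x; theta)."""
    h = jax.nn.relu(x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]


key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
theta0 = {  # frozen anchor weights (the pretrained "specialist" point)
    "W1": 0.1 * jax.random.normal(k1, (32, 64)), "b1": jnp.zeros(64),
    "W2": 0.1 * jax.random.normal(k2, (64, 10)), "b2": jnp.zeros(10),
}
# The only trainable quantity is the displacement delta in the tangent plane at theta0.
delta = jax.tree_util.tree_map(jnp.zeros_like, theta0)


def tangent_forward(delta, x):
    """First-order model: f(x; theta0) + J_theta f(x; theta0) . delta."""
    out0, jvp_out = jax.jvp(lambda p: mlp(p, x), (theta0,), (delta,))
    return out0 + jvp_out


def loss_fn(delta, x, y):
    logp = jax.nn.log_softmax(tangent_forward(delta, x))
    return -jnp.mean(jnp.take_along_axis(logp, y[:, None], axis=1))


@jax.jit
def train_step(delta, x, y, lr=0.1):
    # The model is affine in delta, so this cross-entropy loss is convex in delta.
    grads = jax.grad(loss_fn)(delta, x, y)
    return jax.tree_util.tree_map(lambda d, g: d - lr * g, delta, grads)


class ReservoirBuffer:
    """Small fixed-size memory updated by reservoir (re)sampling."""

    def __init__(self, capacity):
        self.capacity, self.seen, self.items = capacity, 0, []

    def add(self, x, y):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = (x, y)


# One illustrative step on synthetic data; a real loop would interleave current-task
# batches with samples replayed from the buffer across a sequence of tasks.
x = jax.random.normal(key, (16, 32))
y = jax.random.randint(key, (16,), 0, 10)
delta = train_step(delta, x, y)

buffer = ReservoirBuffer(capacity=200)
for i in range(x.shape[0]):
    buffer.add(x[i], y[i])
```

Because the tangent model is affine in the displacement delta, any convex loss such as cross-entropy remains convex in delta, which is the convexity of the optimization landscape the abstract refers to; a full training loop would repeat train_step over sequential tasks while replaying batches drawn from buffer.items.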


