In Defense of the Learning Without Forgetting for Task Incremental Learning

07/26/2021
by Guy Oren, et al.

Catastrophic forgetting is one of the major challenges facing continual learning systems, which are presented with an online stream of tasks. The field has attracted considerable interest, and a diverse set of methods has been proposed to overcome this challenge. Learning without Forgetting (LwF) is one of the earliest and most frequently cited methods. It has the advantages of not requiring the storage of samples from previous tasks, of being simple to implement, and of being well-grounded through its reliance on knowledge distillation. However, the prevailing view is that while it shows relatively little forgetting when only two tasks are introduced, it fails to scale to long sequences of tasks. This paper challenges that view by showing that, with the right architecture and a standard set of augmentations, the results obtained by LwF surpass the latest algorithms for the task-incremental scenario. This improved performance is demonstrated through an extensive set of experiments on CIFAR-100 and Tiny-ImageNet, where it is also shown that other methods cannot benefit as much from similar improvements.
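To make the knowledge-distillation grounding of LwF concrete, below is a minimal sketch of an LwF-style training objective in PyTorch. It is not the authors' code: the function names, the `temperature` and `lam` hyperparameters, and the split between "new-task" and "old-task" logits are illustrative assumptions about a standard multi-head, task-incremental setup.

```python
# Minimal sketch of an LwF-style loss (illustrative, not the authors' implementation).
# Assumes a multi-head model: `old_logits` come from the frozen copy of the previous
# model on the current batch, `new_logits_old_head` from the current model's old-task
# head, and `task_logits` from the current model's new-task head.
import torch
import torch.nn.functional as F


def distillation_loss(new_logits: torch.Tensor,
                      old_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft cross-entropy between the old model's softened outputs and the
    current model's outputs on the previous task's head."""
    old_probs = F.softmax(old_logits / temperature, dim=1)
    new_log_probs = F.log_softmax(new_logits / temperature, dim=1)
    # Scale by T^2, as is conventional in knowledge distillation.
    return -(old_probs * new_log_probs).sum(dim=1).mean() * temperature ** 2


def lwf_total_loss(task_logits: torch.Tensor,
                   targets: torch.Tensor,
                   new_logits_old_head: torch.Tensor,
                   old_logits: torch.Tensor,
                   lam: float = 1.0) -> torch.Tensor:
    # Standard cross-entropy on the new task, plus a distillation term that
    # keeps the old-task head close to the frozen previous model.
    ce = F.cross_entropy(task_logits, targets)
    kd = distillation_loss(new_logits_old_head, old_logits)
    return ce + lam * kd
```

In this sketch, no samples from previous tasks are stored: the distillation term is computed on the current task's images only, using the frozen previous model's predictions as soft targets, which is the property the abstract refers to.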
