Rethinking Task-Incremental Learning Baselines

05/23/2022
by Md Sazzad Hossain, et al.

Real-world applications commonly receive continuous streams of new data that must be incorporated into the system. The model needs to learn newly added capabilities (future tasks) while retaining old knowledge (past tasks). Incremental learning has recently become increasingly appealing for this problem. Task-incremental learning is a kind of incremental learning in which the task identity of a newly included task (a set of classes) remains known during inference. A common goal of task-incremental methods is to design a network of minimal size that maintains decent performance. To manage the stability-plasticity dilemma, different methods utilize replay memory of past tasks, specialized hardware, regularization, etc. However, these methods remain memory-inefficient in terms of architecture growth or input data costs.

In this study, we present a simple yet effective adjustment network (SAN) for task-incremental learning that achieves near state-of-the-art performance while using minimal architectural size and no memory instances, in contrast to previous state-of-the-art approaches. We investigate this approach on both 3D point cloud object (ModelNet40) and 2D image (CIFAR10, CIFAR100, MiniImageNet, MNIST, PermutedMNIST, notMNIST, SVHN, and FashionMNIST) recognition tasks and establish a strong baseline result for a fair comparison with existing methods. On both 2D and 3D domains, we also observe that SAN is largely unaffected by different task orders in a task-incremental setting.
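The defining property of the task-incremental setting described above is that the task identity is supplied at inference time, so the model can route each input through a task-specific component. The sketch below illustrates that routing pattern in plain Python; the class and function names are hypothetical and this is not the paper's SAN implementation, only a minimal illustration of a shared backbone with one lightweight head per task.

```python
# Minimal sketch of task-incremental inference (hypothetical names, not the
# paper's SAN): the task id is known at test time, so each input is routed
# through a shared backbone plus the head registered for that task.

class TaskIncrementalModel:
    def __init__(self, backbone):
        self.backbone = backbone   # shared feature extractor across tasks
        self.heads = {}            # one lightweight head per task id

    def add_task(self, task_id, head):
        """Register a new task's head; previously learned heads are untouched."""
        self.heads[task_id] = head

    def predict(self, x, task_id):
        """Task identity is given at inference in the task-incremental setting."""
        features = self.backbone(x)
        return self.heads[task_id](features)


# Toy usage: the backbone doubles the input; each head maps the feature's
# sign to one of that task's classes.
model = TaskIncrementalModel(backbone=lambda x: 2 * x)
model.add_task(0, lambda f: "cat" if f > 0 else "dog")    # task 0 classes
model.add_task(1, lambda f: "car" if f > 0 else "plane")  # task 1 classes

print(model.predict(3, task_id=0))   # routed through task 0's head -> "cat"
print(model.predict(-3, task_id=1))  # routed through task 1's head -> "plane"
```

Because only a small head is added per task, architectural growth stays minimal, which is the memory-efficiency concern the abstract raises.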

Related research

- 03/22/2021: ZS-IL: Looking Back on Learned Experiences for Zero-Shot Incremental Learning
- 03/24/2023: Remind of the Past: Incremental Learning with Analogical Prompts
- 06/22/2018: Continuous Learning in Single-Incremental-Task Scenarios
- 05/26/2022: A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning
- 10/15/2021: Towards Better Plasticity-Stability Trade-off in Incremental Learning: A Simple Linear Connector
- 11/24/2022: Neural Weight Search for Scalable Task Incremental Learning
- 03/22/2023: Dense Network Expansion for Class Incremental Learning
