ImpressLearn: Continual Learning via Combined Task Impressions

10/05/2022
by Dhrupad Bhardwaj, et al.

This work proposes a new method to sequentially train a deep neural network on multiple tasks without suffering catastrophic forgetting, while endowing it with the capability to quickly adapt to unseen tasks. Building on existing work on network masking (Wortsman et al., 2020), we show that simply learning a linear combination of a small number of task-specific masks (impressions) on a randomly initialized backbone network is sufficient both to retain accuracy on previously learned tasks and to achieve high accuracy on new tasks. In contrast to previous methods, we do not need to generate dedicated masks or contexts for each new task; instead, we leverage transfer learning to keep the per-task parameter overhead small. Our work illustrates the power of linearly combining individual impressions, each of which fares poorly in isolation, to achieve performance comparable to a dedicated mask. Moreover, even repeated impressions from the same task (homogeneous masks), when combined, can approach the performance of heterogeneous combinations if sufficiently many impressions are used. Our approach scales more efficiently than existing methods, often requiring orders of magnitude fewer parameters, and can function without modification even when task identity is missing. In particular, when task labels are not given at inference, our algorithm offers an often favorable alternative to the entropy-based task-inference methods proposed in Wortsman et al. (2020). We evaluate our method on a number of well-known image classification datasets and architectures.
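For intuition only, the sketch below illustrates the kind of layer the abstract describes: a frozen, randomly initialized weight matrix gated by a learned linear combination of fixed binary masks, so that only the mixing coefficients are trained for a new task. The class name, random mask generation, and shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code): a linear layer whose frozen
# random backbone weights are gated by a learned combination of fixed binary
# masks ("impressions").
import torch
import torch.nn as nn


class ImpressionCombinedLinear(nn.Module):
    def __init__(self, in_features, out_features, num_impressions):
        super().__init__()
        # Frozen, randomly initialized backbone weights (never trained).
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features), requires_grad=False
        )
        # Fixed binary masks, one per impression. Random here for illustration;
        # in practice these would be masks obtained for earlier tasks.
        self.register_buffer(
            "masks",
            (torch.rand(num_impressions, out_features, in_features) > 0.5).float(),
        )
        # Only these mixing coefficients are learned for a new task.
        self.alphas = nn.Parameter(torch.full((num_impressions,), 1.0 / num_impressions))

    def forward(self, x):
        # Effective mask = linear combination of the stored impressions.
        combined_mask = torch.einsum("k,koi->oi", self.alphas, self.masks)
        return nn.functional.linear(x, self.weight * combined_mask)


# Usage: gradients flow only into `alphas`, so per-task overhead is
# num_impressions scalars per layer.
layer = ImpressionCombinedLinear(128, 64, num_impressions=8)
out = layer(torch.randn(32, 128))
```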


Related research

01/23/2020
Ternary Feature Masks: continual learning without any forgetting
In this paper, we propose an approach without any forgetting to continua...

10/18/2022
Exclusive Supermask Subnetwork Training for Continual Learning
Continual Learning (CL) methods mainly focus on avoiding catastrophic fo...

01/19/2018
Piggyback: Adding Multiple Tasks to a Single, Fixed Network by Learning to Mask
This work presents a method for adding multiple tasks to a single, fixed...

03/27/2023
Forget-free Continual Learning with Soft-Winning SubNetworks
Inspired by Regularized Lottery Ticket Hypothesis (RLTH), which states t...

11/19/2021
Defeating Catastrophic Forgetting via Enhanced Orthogonal Weights Modification
The ability of neural networks (NNs) to learn and remember multiple task...

09/27/2020
Beneficial Perturbation Network for designing general adaptive artificial intelligence systems
The human brain is the gold standard of adaptive learning. It not only c...

02/22/2022
Increasing Depth of Neural Networks for Life-long Learning
Increasing neural network depth is a well-known method for improving neu...
