Incremental Task Learning with Incremental Rank Updates

07/19/2022
by Rakib Hyder, et al.

Incremental Task Learning (ITL) is a category of continual learning that seeks to train a single network on multiple tasks, one after another, where the training data for each task is available only while that task is being trained. Neural networks tend to forget older tasks when they are trained on newer ones, a phenomenon known as catastrophic forgetting. To address this issue, ITL methods use episodic memory, parameter regularization, masking and pruning, or extensible network structures. In this paper, we propose a new incremental task learning framework based on low-rank factorization. In particular, we represent the network weights for each layer as a linear combination of several rank-1 matrices. To update the network for a new task, we learn a rank-1 (or low-rank) matrix and add it to the weights of every layer. We also introduce an additional selector vector that assigns different weights to the low-rank matrices learned for the previous tasks. We show that our approach outperforms current state-of-the-art methods in terms of both accuracy and forgetting, and that it offers better memory efficiency than episodic memory- and mask-based approaches. Our code will be available at https://github.com/CSIPlab/task-increment-rank-update.git
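To make the update rule concrete, here is a minimal PyTorch sketch of the idea the abstract describes: each layer's weight is a selector-weighted sum of rank-1 factors, and each new task freezes the old factors, adds one trainable rank-1 pair, and learns a selector over all factors seen so far. This is an illustration based only on the abstract, not the authors' implementation; the class name LowRankLinear, the add_task method, and the initialization scale are hypothetical (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Linear layer whose weight is a selector-weighted sum of rank-1 factors.

    One rank-1 factor (u_i, v_i) is added per task; the selector for task t
    re-weights all factors available up to task t, so W_t = sum_i s_t[i] u_i v_i^T.
    (Hypothetical sketch of the method described in the abstract.)
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.us = nn.ParameterList()         # left rank-1 factors, one per task
        self.vs = nn.ParameterList()         # right rank-1 factors, one per task
        self.selectors = nn.ParameterList()  # per-task weights over earlier factors

    def add_task(self):
        """Freeze everything learned so far, then add a new rank-1 factor
        and a selector vector to be trained on the incoming task."""
        for p in self.parameters():
            p.requires_grad_(False)
        self.us.append(nn.Parameter(0.01 * torch.randn(self.out_features, 1)))
        self.vs.append(nn.Parameter(0.01 * torch.randn(1, self.in_features)))
        self.selectors.append(nn.Parameter(torch.ones(len(self.us))))

    def forward(self, x, task_id):
        # Reconstruct the task-specific weight as a linear combination of
        # rank-1 matrices, using only the factors learned up to `task_id`.
        s = self.selectors[task_id]
        w = sum(s[i] * (self.us[i] @ self.vs[i]) for i in range(task_id + 1))
        return x @ w.t()

# Example: train one layer over five sequential tasks.
layer = LowRankLinear(784, 10)
for t in range(5):
    layer.add_task()
    trainable = [p for p in layer.parameters() if p.requires_grad]
    opt = torch.optim.Adam(trainable, lr=1e-3)
    # ... train on task t's data, calling layer(x, task_id=t) ...
```

Note that each task adds only one rank-1 pair and a short selector per layer, so per-task storage grows with (in + out) rather than (in x out), which is consistent with the memory-efficiency claim above.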

Related research

Exclusive Supermask Subnetwork Training for Continual Learning (10/18/2022)
Continual Learning (CL) methods mainly focus on avoiding catastrophic fo...

Overcoming Catastrophic Forgetting by Soft Parameter Pruning (12/04/2018)
Catastrophic forgetting is a challenging issue in continual learning when ...

InRank: Incremental Low-Rank Learning (06/20/2023)
The theory of greedy low-rank learning (GLRL) aims to explain the impres...

Compression-aware Continual Learning using Singular Value Decomposition (09/03/2020)
We propose a compression-based continual task learning method that can d...

PaRT: Parallel Learning Towards Robust and Transparent AI (01/24/2022)
This paper takes a parallel learning approach for robust and transparent...

Learning a Practical SDR-to-HDRTV Up-conversion using New Dataset and Degradation Models (03/23/2023)
In the media industry, the demand for SDR-to-HDRTV up-conversion arises when ...

Neural Weight Search for Scalable Task Incremental Learning (11/24/2022)
Task incremental learning aims to enable a system to maintain its perfor...
