Adaptive Compression-based Lifelong Learning

07/23/2019
by Shivangi Srivastava, et al.

The problem of a deep learning model losing performance on a previously learned task when fine-tuned to a new one is known as catastrophic forgetting. There are two major ways to mitigate it: preserving the activations of the initial network while training on a new task, or restricting the new network's activations to remain close to the initial ones. The latter approach falls under the umbrella of lifelong learning, where the model is updated so that it performs well on both old and new tasks without access to the old task's training samples anymore. Recently, approaches that prune the network to free capacity during sequential learning of tasks have been gaining popularity. They learn compact networks for each task while making redundant parameters available for subsequent ones. A common problem with these approaches is that the pruning percentage is hard-coded, irrespective of the number of samples, the complexity of the learning task, and the number of classes in the dataset. We propose a method based on Bayesian optimization that performs adaptive compression/pruning of the network, and we show its effectiveness for lifelong learning. Our method learns to prune heavily for small and/or simple datasets while using milder compression rates for large and/or complex data. Experiments on classification and semantic segmentation demonstrate the applicability of learned network compression: we effectively preserve performance across sequences of tasks of varying complexity.
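
The abstract does not give implementation details, but the core idea of searching for a per-task pruning ratio with Bayesian optimization can be sketched as follows. This is a minimal illustration assuming a PyTorch classifier and scikit-optimize's gp_minimize; the evaluate helper, the objective weighting alpha, and the search range are hypothetical choices, not the authors' implementation.

```python
# Minimal sketch: Bayesian optimization of a per-task pruning ratio.
# Assumes a PyTorch model and scikit-optimize; the objective and its
# weighting are illustrative, not the paper's exact formulation.
import copy
import torch
import torch.nn.utils.prune as prune
from skopt import gp_minimize
from skopt.space import Real

def evaluate(model, loader, device="cpu"):
    """Top-1 accuracy of `model` on a validation loader (hypothetical helper)."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / max(total, 1)

def make_objective(model, val_loader, alpha=0.5):
    """Return f(ratio) = accuracy drop - alpha * ratio, to be minimized:
    keep validation accuracy high while pruning as much as possible."""
    base_acc = evaluate(model, val_loader)

    def objective(params):
        ratio = float(params[0])
        trial = copy.deepcopy(model)  # prune a copy, keep the original intact
        for module in trial.modules():
            if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
                prune.l1_unstructured(module, name="weight", amount=ratio)
        acc = evaluate(trial, val_loader)
        return (base_acc - acc) - alpha * ratio

    return objective

def find_pruning_ratio(model, val_loader, n_calls=20):
    """GP-based Bayesian optimization over the pruning ratio in [0.05, 0.95]."""
    result = gp_minimize(
        make_objective(model, val_loader),
        dimensions=[Real(0.05, 0.95, name="ratio")],
        n_calls=n_calls,
        random_state=0,
    )
    return result.x[0]  # adaptive ratio: large for easy tasks, small for hard ones
```

In this sketch the Gaussian-process surrogate trades off the validation-accuracy drop against the fraction of weights removed, so simple or small tasks that tolerate aggressive pruning naturally end up with a larger ratio, while complex tasks settle on milder compression.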


research
09/29/2020

One Person, One Model, One World: Learning Continual User Representation without Forgetting

Learning generic user representations which can then be applied to other...
research
06/21/2021

Iterative Network Pruning with Uncertainty Regularization for Lifelong Sentiment Classification

Lifelong learning capabilities are crucial for sentiment classifiers to ...
research
11/15/2017

PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning

This paper presents a method for adding multiple tasks to a single deep ...
research
07/17/2021

Continual Learning for Task-oriented Dialogue System with Iterative Network Pruning, Expanding and Masking

This ability to learn consecutive tasks without forgetting how to perfor...
research
03/23/2023

Adaptive Regularization for Class-Incremental Learning

Class-Incremental Learning updates a deep classifier with new categories...
research
10/29/2018

Incremental Learning for Semantic Segmentation of Large-Scale Remote Sensing Data

In spite of remarkable success of the convolutional neural networks on s...
research
08/14/2023

Ada-QPacknet – adaptive pruning with bit width reduction as an efficient continual learning method without forgetting

Continual Learning (CL) is a process in which there is still huge gap be...
