Continual Learning via Bit-Level Information Preserving

05/10/2021
by   Yujun Shi, et al.

Continual learning tackles the setting of learning different tasks sequentially. Despite many previous solutions, most still suffer from significant forgetting or expensive memory cost. In this work, targeting these problems, we first study the continual learning process through the lens of information theory and observe that a model's forgetting stems from the loss of information gain on its parameters from previous tasks when learning a new task. From this viewpoint, we then propose a novel continual learning approach called Bit-Level Information Preserving (BLIP), which preserves the information gain on model parameters by updating the parameters at the bit level, conveniently implemented with parameter quantization. More specifically, BLIP first trains a neural network with weight quantization on the new incoming task and then estimates the information gain on each parameter provided by the task data to determine which bits to freeze to prevent forgetting. We conduct extensive experiments ranging from classification tasks to reinforcement learning tasks, and the results show that our method produces results better than or on par with previous state-of-the-art methods. Indeed, BLIP achieves close to zero forgetting while requiring only constant memory overhead throughout continual learning.
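The core mechanism described above can be sketched in a few lines of NumPy. This is a minimal illustration of bit-level freezing under parameter quantization, not the paper's implementation: the bit width, the sign handling, and the `freeze_bits` helper are assumptions for demonstration, and the information-gain estimation that decides how many bits to freeze per parameter is omitted.

```python
import numpy as np

def quantize(w, n_bits=10, w_max=1.0):
    # Map a float weight in [-w_max, w_max] to a signed integer code
    # with (n_bits - 1) magnitude bits.
    scale = (1 << (n_bits - 1)) - 1
    return int(np.clip(np.round(w / w_max * scale), -scale, scale))

def dequantize(q, n_bits=10, w_max=1.0):
    # Recover an approximate float weight from its integer code.
    scale = (1 << (n_bits - 1)) - 1
    return q / scale * w_max

def freeze_bits(old_q, new_q, frozen_bits, n_bits=10):
    # Keep the top `frozen_bits` most significant bits from old_q
    # (information preserved from previous tasks) and take the
    # remaining low-order bits from new_q (learned on the new task).
    free = n_bits - frozen_bits
    mask_low = (1 << free) - 1        # bits still allowed to change
    offset = 1 << (n_bits - 1)        # shift to non-negative range
    o, n = old_q + offset, new_q + offset
    merged = (o & ~mask_low) | (n & mask_low)
    return merged - offset
```

With `frozen_bits` equal to the full bit width a parameter is completely protected from forgetting; with `frozen_bits = 0` it is free to adapt to the new task, so the per-parameter information gain interpolates between stability and plasticity at bit granularity.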

Related research

05/31/2018  Reinforced Continual Learning
Most artificial intelligence models have limiting ability to solve new t...

07/09/2020  Graph-Based Continual Learning
Despite significant advances, continual learning models still suffer fro...

04/26/2022  Theoretical Understanding of the Information Flow on Continual Learning Performance
Continual learning (CL) is a setting in which an agent has to learn from...

03/30/2020  Adaptive Group Sparse Regularization for Continual Learning
We propose a novel regularization-based continual learning method, dubbe...

11/14/2022  Hierarchically Structured Task-Agnostic Continual Learning
One notable weakness of current machine learning algorithms is the poor ...

11/03/2022  Continual Learning of Neural Machine Translation within Low Forgetting Risk Regions
This paper considers continual learning of large-scale pretrained neural...
