Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning

04/11/2023
by Wenjin Wang, et al.

Parameter regularization and parameter allocation methods are effective at overcoming catastrophic forgetting in lifelong learning. However, they treat all tasks in a sequence uniformly and ignore differences in learning difficulty across tasks. Consequently, parameter regularization methods suffer significant forgetting when a new task differs greatly from previously learned tasks, and parameter allocation methods incur unnecessary parameter overhead when learning simple tasks. In this paper, we propose Parameter Allocation & Regularization (PAR), which adaptively selects an appropriate strategy for each task, either parameter allocation or regularization, based on the task's learning difficulty. A task is easy for a model that has already learned related tasks, and difficult otherwise. To measure task relatedness using only features of the new task, we propose a divergence estimation method based on the Nearest-Prototype distance. Moreover, we propose a time-efficient, relatedness-aware, sampling-based architecture search strategy to reduce the parameter overhead of allocation. Experimental results on multiple benchmarks demonstrate that, compared with state-of-the-art methods, our approach is scalable and significantly reduces the model's redundancy while improving performance. Further qualitative analysis shows that PAR obtains reasonable estimates of task relatedness.
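The relatedness measure can be illustrated with a minimal sketch. The snippet below assumes each previously learned task is summarized by class prototypes (mean feature vectors) retained from training, and takes the relatedness of a new task to an old one as the average distance from each new-task feature to its nearest prototype of that old task. The abstract only states that a divergence is estimated from Nearest-Prototype distances, so the simple average here is an illustrative stand-in, and all function and variable names are assumptions rather than the authors' implementation.

```python
# Illustrative sketch of nearest-prototype-based task relatedness.
# Assumptions: old tasks are summarized by class prototypes (mean features);
# relatedness is approximated by the mean nearest-prototype distance.
import numpy as np


def nearest_prototype_distance(features: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """For each new-task feature vector, return the Euclidean distance
    to the closest class prototype of one previously learned task."""
    # Pairwise distances, shape (n_features, n_prototypes)
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.min(axis=1)


def most_related_task(new_task_features: np.ndarray,
                      old_task_prototypes: dict) -> int:
    """Pick the learned task whose prototypes are, on average,
    closest to the new task's features (a proxy for relatedness)."""
    avg_dist = {
        task_id: nearest_prototype_distance(new_task_features, protos).mean()
        for task_id, protos in old_task_prototypes.items()
    }
    return min(avg_dist, key=avg_dist.get)
```

Roughly speaking, a new task whose features lie close to some learned task's prototypes would be treated as easy and handled by parameter regularization, while a distant, unrelated task would trigger allocation of new parameters; the exact divergence estimator and decision threshold are details of the full paper.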

