Multi-Domain Multi-Task Rehearsal for Lifelong Learning

12/14/2020
by Fan Lyu, et al.

Rehearsal, which stores old knowledge to remind the model of it, is one of the most effective ways in lifelong learning to mitigate catastrophic forgetting, i.e., the biased forgetting of previous knowledge when moving to new tasks. However, in most previous rehearsal-based methods, the old tasks suffer from unpredictable domain shift when the new task is trained. This is because these methods ignore two significant factors. First, the data imbalance between the new task and the old tasks makes the domain of the old tasks prone to shift. Second, the task isolation among all tasks drives the domain shift in unpredictable directions. To address this unpredictable domain shift, in this paper we propose Multi-Domain Multi-Task (MDMT) rehearsal, which trains the old tasks and the new task in parallel and on equal footing to break the isolation among tasks. Specifically, a two-level angular margin loss is proposed to encourage intra-class/task compactness and inter-class/task discrepancy, which keeps the model from domain chaos. In addition, to further address domain shift of the old tasks, we propose an optional episodic distillation loss on the memory to anchor the knowledge of each old task. Experiments on benchmark datasets validate that the proposed approach can effectively mitigate unpredictable domain shift.
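The abstract's central tool is an angular margin loss applied at both the class and the task level. As a rough illustration of the single-level building block, here is a minimal ArcFace-style additive angular margin loss in numpy. This is a hedged sketch of the general technique, not the paper's exact two-level formulation; the function name, the margin value 0.5, and the scale value 30.0 are illustrative assumptions.

```python
import numpy as np

def angular_margin_loss(features, weights, label, margin=0.5, scale=30.0):
    """ArcFace-style additive angular margin loss for one sample.

    features: (d,) embedding; weights: (num_classes, d) class centres.
    Adding a margin to the angle of the true class before the softmax
    tightens intra-class angles and widens inter-class gaps, which is
    the compactness/discrepancy effect the abstract describes.
    """
    # Cosine similarity between the normalised feature and each centre.
    f = features / np.linalg.norm(features)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = w @ f                                   # (num_classes,)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))    # angles in [0, pi]

    # Penalise the true class by adding the angular margin m to its angle.
    logits = cos.copy()
    logits[label] = np.cos(theta[label] + margin)
    logits *= scale

    # Standard cross-entropy over the margin-adjusted logits.
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[label])
```

In the paper's setting this idea is applied twice, once over classes within a task and once over tasks, so that both class embeddings and task embeddings stay compact and mutually separated.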


