Domain Incremental Lifelong Learning in an Open World

05/11/2023
by Yi Dai et al.
Lifelong learning (LL) is an important ability for NLP models to learn new tasks continuously. Architecture-based approaches are reported to be effective implementations for LL models. However, it is non-trivial to extend previous approaches to domain incremental LL scenarios, since they either require access to task identities in the testing phase or cannot handle samples from unseen tasks. In this paper, we propose Diana: a dynamic architecture-based lifelong learning model that learns a sequence of tasks with a prompt-enhanced language model. Four types of hierarchically organized prompts are used in Diana to capture knowledge at different granularities. Specifically, we dedicate task-level prompts to capture task-specific knowledge to retain high LL performance, and maintain instance-level prompts to learn knowledge shared across input samples to improve the model's generalization performance. Moreover, we dedicate separate prompts to explicitly model unseen tasks and introduce a set of prompt key vectors to facilitate knowledge sharing between tasks. Extensive experiments demonstrate that Diana outperforms state-of-the-art LL models, especially in handling unseen tasks. We release the code and data at <https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/diana>.
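The instance-level prompt mechanism described above can be sketched as a pool of prompts, each paired with a key vector; at inference, the prompts whose keys best match the input embedding are retrieved, so knowledge is shared across similar samples without requiring a task identity. The following is a minimal, hypothetical illustration of such key-based prompt retrieval, not Diana's actual implementation; all names and dimensions are assumptions.

```python
import numpy as np

class PromptPool:
    """Sketch of instance-level prompt retrieval via key matching.

    Hypothetical example (not Diana's released code): each prompt has a
    key vector; at inference we select the top-k prompts whose keys are
    most similar to the query embedding of the input sample.
    """

    def __init__(self, num_prompts, key_dim, prompt_len, embed_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Learnable parameters in a real model; random here for illustration.
        self.keys = rng.normal(size=(num_prompts, key_dim))
        self.prompts = rng.normal(size=(num_prompts, prompt_len, embed_dim))

    def retrieve(self, query, k=2):
        """Return the k prompts whose keys best match the query embedding."""
        keys = self.keys / np.linalg.norm(self.keys, axis=1, keepdims=True)
        q = query / np.linalg.norm(query)
        scores = keys @ q                 # cosine similarity to each key
        top = np.argsort(-scores)[:k]     # indices of best-matching keys
        return self.prompts[top]          # shape: (k, prompt_len, embed_dim)

pool = PromptPool(num_prompts=10, key_dim=8, prompt_len=4, embed_dim=16)
selected = pool.retrieve(np.ones(8), k=2)
print(selected.shape)  # (2, 4, 16)
```

In a full model, the retrieved prompts would be prepended to the input token embeddings before the language model forward pass, and the keys would be trained to cluster inputs with shared knowledge.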

