Different Strokes for Different Folks: Investigating Appropriate Further Pre-training Approaches for Diverse Dialogue Tasks

09/14/2021
by Yao Qiu, et al.

Loading models pre-trained on a large-scale general-domain corpus and fine-tuning them on specific downstream tasks has gradually become a paradigm in Natural Language Processing. Previous investigations show that introducing a further pre-training phase between the pre-training and fine-tuning phases to adapt the model to domain-specific unlabeled data can bring positive effects. However, most of these further pre-training works simply continue the conventional pre-training task, e.g., the masked language model, which can be regarded as domain adaptation to bridge the data distribution gap. After observing diverse downstream tasks, we suggest that different tasks may also need a further pre-training phase with appropriate training tasks to bridge the task formulation gap. To investigate this, we carry out a study on improving multiple task-oriented dialogue downstream tasks by designing various tasks for the further pre-training phase. The experiments show that different downstream tasks prefer different further pre-training tasks, which have an intrinsic correlation, and that most further pre-training tasks significantly improve certain target tasks rather than all of them. Our investigation indicates that it is important and effective to design appropriate further pre-training tasks that model the specific information benefiting downstream tasks. In addition, we present multiple constructive empirical conclusions for enhancing task-oriented dialogues.
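To make the "further pre-training" phase described above concrete, the sketch below shows how a general-domain checkpoint might be adapted on unlabeled task-oriented dialogue text with the conventional masked language modeling objective, using the Hugging Face Transformers library. The model name, data file, and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of a further pre-training phase with masked language modeling (MLM)
# on domain-specific unlabeled dialogue data. Assumptions: a BERT-style checkpoint,
# a plain-text file with one dialogue utterance per line, and generic hyperparameters.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "bert-base-uncased"  # assumed general-domain pre-trained model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Unlabeled task-oriented dialogue corpus (hypothetical file name).
dataset = load_dataset("text", data_files={"train": "dialogue_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking of 15% of tokens, as in standard MLM pre-training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="further-pretrained-dialogue-bert",
    per_device_train_batch_size=32,
    num_train_epochs=3,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()

# The resulting checkpoint would then be fine-tuned on the labeled downstream
# dialogue task; the paper's point is that the objective used here could instead
# be a task-appropriate one rather than plain MLM.
```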
