Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner

05/02/2023
by Zhengxiang Shi, et al.

Language models (LMs) trained on vast quantities of unlabelled data have greatly advanced the field of natural language processing (NLP). In this study, we revisit the widely accepted notion in NLP that continued pre-training of LMs on task-related texts improves the performance of fine-tuning (FT) on downstream tasks. Through experiments on eight single-sentence tasks and eight sentence-pair tasks in both semi-supervised and fully-supervised settings, we find that conventional continued pre-training does not consistently provide benefits and can even be detrimental for sentence-pair tasks or when prompt-based FT is used. To tackle these issues, we propose Prompt-based Continued Pre-training (PCP), which combines the idea of instruction tuning with conventional continued pre-training. Our approach aims to improve the performance of prompt-based FT by presenting both task-related texts and prompt templates to LMs through unsupervised pre-training objectives before fine-tuning for the target task. Our empirical evaluations on 21 benchmarks demonstrate that the PCP consistently improves the performance of state-of-the-art prompt-based FT approaches (by up to 20.1% absolute) in both semi-supervised and fully-supervised settings, even with only hundreds of unlabelled examples. Additionally, prompt-based FT with the PCP outperforms state-of-the-art semi-supervised approaches with greater simplicity, eliminating the need for an iterative process and extra data augmentation. Our further analysis explores the performance lower bound of the PCP and reveals that the advantages of the PCP persist across different sizes of models and datasets.
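The abstract describes PCP only at a high level. As a rough illustration of the idea, the sketch below wraps unlabelled task-related texts in a downstream prompt template and continues masked-language-model training on them before any prompt-based fine-tuning. The model name (roberta-base), the sentiment template, the example texts, and all hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of Prompt-based Continued Pre-training (PCP):
# continue unsupervised (MLM) training on task-related texts that have been
# formatted with the downstream prompt template, then run prompt-based FT
# from the resulting checkpoint. All names and settings below are assumptions.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import Dataset

model_name = "roberta-base"  # assumption: any masked LM could be used here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Unlabelled task-related texts (placeholder examples).
unlabelled_texts = [
    "The movie was a complete waste of time.",
    "An absolute delight from start to finish.",
]

# Wrap each text in the prompt template later used for prompt-based FT,
# with the label position left as a mask token (one plausible construction;
# the paper's exact template handling may differ).
template = "{text} It was {mask}."
prompted = [
    template.format(text=t, mask=tokenizer.mask_token) for t in unlabelled_texts
]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = Dataset.from_dict({"text": prompted}).map(tokenize, batched=True)

# Standard MLM collator: random masking applied on top of the prompted texts.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pcp_checkpoint", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
# Prompt-based fine-tuning for the target task would then start from this checkpoint.
trainer.save_model("pcp_checkpoint")
```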
