Towards Unified Prompt Tuning for Few-shot Text Classification

05/11/2022
by Jianing Wang, et al.

Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts. Yet PLMs are not exposed to prompt-style expressions during pre-training, which limits their few-shot learning performance on downstream tasks. It would be desirable if the models could acquire some prompting knowledge before being adapted to specific NLP tasks. We present the Unified Prompt Tuning (UPT) framework, which improves few-shot text classification for BERT-style models by explicitly capturing prompting semantics from non-target NLP datasets. In UPT, a novel Prompt-Options-Verbalizer paradigm is proposed for joint prompt learning across different NLP tasks, forcing PLMs to capture task-invariant prompting knowledge. We further design a self-supervised task, Knowledge-enhanced Selective Masked Language Modeling, to improve the PLM's generalization ability for accurate adaptation to previously unseen tasks. After multi-task learning across these non-target tasks, the PLM can be better prompt-tuned towards dissimilar target tasks in low-resource settings. Experiments over a variety of NLP tasks show that UPT consistently outperforms state-of-the-art methods for prompt-based fine-tuning.
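
To make the Prompt-Options-Verbalizer idea concrete, the sketch below shows, in plain Python, how one classification example might be rendered into a text-plus-options cloze format that a BERT-style masked language model would score. The function names (build_pov_input, verbalize), the template wording, and the sentiment example are illustrative assumptions, not the authors' released implementation.

    # Minimal sketch (assumed, not the paper's code) of a Prompt-Options-Verbalizer input.
    from typing import Dict, List

    MASK = "[MASK]"  # BERT-style mask token

    def build_pov_input(text: str, options: List[str], prompt: str) -> str:
        """Concatenate the input text, an enumeration of candidate answer
        options, and a cloze-style prompt containing the mask token."""
        options_str = "Options: " + ", ".join(options) + "."
        cloze = prompt.replace("{mask}", MASK)
        return text + " " + options_str + " " + cloze

    def verbalize(predicted_token: str, verbalizer: Dict[str, str]) -> str:
        """Map the token predicted at the mask position back to a task label."""
        return verbalizer.get(predicted_token, "unknown")

    if __name__ == "__main__":
        # Illustrative sentiment example: the PLM scores each option at the
        # [MASK] position, and the verbalizer maps the winning token to a label.
        verbalizer = {"great": "positive", "terrible": "negative"}
        example = build_pov_input(
            text="The film was a pleasant surprise.",
            options=list(verbalizer.keys()),
            prompt="It was {mask}.",
        )
        print(example)
        print(verbalize("great", verbalizer))

Because every source task is rendered into this shared text-options-prompt format, heterogeneous datasets can be mixed during multi-task learning, and a new target task only needs its own prompt and verbalizer at adaptation time.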


Related research

03/29/2020 - Meta Fine-Tuning Neural Language Models for Multi-Domain Text Mining
Pre-trained neural language models bring significant improvement for var...

08/29/2023 - TransPrompt v2: A Transferable Prompting Framework for Cross-task Text Classification
Text classification is one of the most imperative tasks in natural langu...

05/24/2021 - PTR: Prompt Tuning with Rules for Text Classification
Fine-tuned pre-trained language models (PLMs) have achieved awesome perf...

01/27/2022 - Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation
Large pretrained language models (LMs) like BERT have improved performan...

10/15/2021 - Exploring Low-dimensional Intrinsic Task Subspace via Prompt Tuning
How can pre-trained language models (PLMs) learn universal representatio...

10/13/2022 - Can Demographic Factors Improve Text Classification? Revisiting Demographic Adaptation in the Age of Transformers
Demographic factors (e.g., gender or age) shape our language. Previous w...

03/01/2022 - Investigating Selective Prediction Approaches Across Several Tasks in IID, OOD, and Adversarial Settings
In order to equip NLP systems with selective prediction capability, seve...
