TransPrompt v2: A Transferable Prompting Framework for Cross-task Text Classification

08/29/2023
by   Jianing Wang, et al.

Text classification is one of the most fundamental tasks in natural language processing (NLP). Recent advances with pre-trained language models (PLMs) have shown remarkable success on this task. However, the strong results obtained by PLMs heavily depend on large amounts of task-specific labeled data, which may not be available in many application scenarios due to data access and privacy constraints. The recently proposed prompt-based fine-tuning paradigm improves the performance of PLMs for few-shot text classification with task-specific templates. Yet it is unclear how prompting knowledge can be transferred across tasks so that they mutually reinforce one another. We propose TransPrompt v2, a novel transferable prompting framework for few-shot learning across similar or distant text classification tasks. For learning across similar tasks, we employ a multi-task meta-knowledge acquisition (MMA) procedure to train a meta-learner that captures cross-task transferable knowledge. For learning across distant tasks, we further inject task type descriptions into the prompts and capture intra-type and inter-type prompt embeddings among multiple distant tasks. Additionally, two de-biasing techniques are designed to make the trained meta-learner more task-agnostic and unbiased towards any specific task. The meta-learner can then be adapted to each specific task with better parameter initialization. Extensive experiments show that TransPrompt v2 outperforms strong single-task and cross-task baselines across multiple NLP tasks and datasets. We further show that the meta-learner can effectively improve the performance of PLMs on previously unseen tasks. In addition, TransPrompt v2 also outperforms strong fine-tuning baselines when trained with full training sets.
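To make the prompt-based few-shot setup referred to above concrete, the sketch below wraps an input sentence in a task-specific template with a masked slot, prepends a task type description (the cross-task cue TransPrompt v2 uses for distant tasks), and scores a small verbalizer over the PLM's mask predictions. This is a minimal illustration only: the template wording, the sentiment verbalizer, and the task-description prefix are assumptions for the example, not the authors' exact design, and the meta-learning and de-biasing stages are omitted.

```python
# Minimal sketch of prompt-based classification with a masked-LM PLM.
# Template, verbalizer, and task-description prefix are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"  # any masked-LM PLM would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

# Hypothetical verbalizer: label word -> class index (sentiment example).
VERBALIZER = {"great": 1, "terrible": 0}

def classify(sentence: str, task_description: str = "sentiment analysis") -> int:
    # Inject a task type description into the prompt, then add a template
    # containing the mask slot that the verbalizer words compete to fill.
    prompt = f"Task: {task_description}. {sentence} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: [1, seq_len, vocab_size]

    # Locate the masked position and read out the scores of the label words only.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    scores = {
        label: logits[0, mask_pos, tokenizer.convert_tokens_to_ids(word)].item()
        for word, label in VERBALIZER.items()
    }
    return max(scores, key=scores.get)

# Zero-shot usage; without few-shot fine-tuning the prediction may vary.
print(classify("The movie was a waste of two hours."))
```

In the full framework, the parameters behind this kind of prompting are first trained jointly on several tasks to obtain the meta-learner, which then serves as the initialization adapted to each target task.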

Related research

05/11/2022 · Towards Unified Prompt Tuning for Few-shot Text Classification
Prompt-based fine-tuning has boosted the performance of Pre-trained Lang...

10/14/2021 · Compressibility of Distributed Document Representations
Contemporary natural language processing (NLP) revolves around learning ...

04/04/2020 · Knowledge Guided Metric Learning for Few-Shot Text Classification
The training of deep-learning-based text classification models relies he...

06/18/2023 · Evolutionary Verbalizer Search for Prompt-based Few Shot Text Classification
Recent advances for few-shot text classification aim to wrap textual inp...

10/29/2022 · STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot Classification
The effectiveness of prompt learning has been demonstrated in different ...

10/22/2022 · Meta-learning Pathologies from Radiology Reports using Variance Aware Prototypical Networks
Large pretrained Transformer-based language models like BERT and GPT hav...

06/30/2023 · Meta-training with Demonstration Retrieval for Efficient Few-shot Learning
Large language models show impressive results on few-shot NLP tasks. How...
