Dynamic Prompting: A Unified Framework for Prompt Tuning

03/06/2023
by Xianjun Yang, et al.

Prompt tuning has been shown to be highly effective at efficiently eliciting knowledge from language models (LMs). However, prompt tuning still lags behind fine-tuning, especially when the LMs are small. P-tuning v2 (Liu et al., 2021b) makes it comparable with fine-tuning by adding continuous prompts to every layer of the pre-trained model. However, prepending fixed soft prompts to all instances, regardless of their differences, is questionable: the insertion position, the length, and the representation of the prompts could all affect prompt-tuning performance across diverse instances and tasks. To fill this gap, we propose dynamic prompting (DP), in which the position, length, and representation of prompts are all dynamically optimized with respect to different tasks and instances. We conduct comprehensive experiments on the SuperGLUE benchmark to validate our hypothesis and demonstrate substantial improvements. We also derive a unified framework supporting our dynamic prompting strategy; in particular, we use a simple learning network and Gumbel-Softmax to learn instance-dependent guidance. Experimental results show that simple instance-level, position-aware soft prompts improve classification accuracy by up to 6 points on average across five datasets, reducing the gap with fine-tuning. We also demonstrate that DP remains useful under full-data, few-shot, and multitask regimes, and that combining these dynamic strategies further unleashes the power of DP, narrowing the gap with fine-tuning even more.
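
As a rough illustration of the mechanism described in the abstract, the following is a minimal PyTorch sketch of how a small learning network plus Gumbel-Softmax could pick an instance-dependent insertion position for a shared soft prompt. The class name, the mean-pooled instance representation, the candidate offsets, and all sizes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicPositionPrompt(nn.Module):
    """Sketch: choose an instance-dependent insertion position for a shared
    soft prompt via a small network and Gumbel-Softmax (straight-through)."""

    def __init__(self, hidden_size=768, prompt_len=20, num_positions=4, tau=1.0):
        super().__init__()
        # Shared soft prompt embeddings (trainable).
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_size) * 0.02)
        # Small learning network: instance representation -> logits over candidate positions.
        self.position_net = nn.Sequential(
            nn.Linear(hidden_size, 128), nn.ReLU(), nn.Linear(128, num_positions)
        )
        self.num_positions = num_positions
        self.tau = tau

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, hidden) token embeddings of a frozen LM.
        batch, seq_len, _ = input_embeds.shape
        # Mean-pooled instance representation (an assumption; any pooling would do).
        instance_repr = input_embeds.mean(dim=1)
        logits = self.position_net(instance_repr)
        # Hard one-hot in the forward pass, soft gradients in the backward pass.
        weights = F.gumbel_softmax(logits, tau=self.tau, hard=True)  # (batch, num_positions)

        # Candidate insertion offsets, spread evenly over the sequence (an assumption).
        offsets = torch.linspace(0, seq_len, self.num_positions).round().long().tolist()
        candidates = []
        for pos in offsets:
            spliced = torch.cat(
                [input_embeds[:, :pos],
                 self.prompt.unsqueeze(0).expand(batch, -1, -1),
                 input_embeds[:, pos:]],
                dim=1,
            )  # (batch, seq_len + prompt_len, hidden)
            candidates.append(spliced)
        candidates = torch.stack(candidates, dim=1)  # (batch, num_positions, L, hidden)
        # Mix candidates with the (hard) weights so the position choice stays trainable.
        return (weights[:, :, None, None] * candidates).sum(dim=1)

# Toy usage: splice the prompt into frozen-LM input embeddings.
dp = DynamicPositionPrompt()
embeds = torch.randn(2, 16, 768)
print(dp(embeds).shape)  # torch.Size([2, 36, 768])

The straight-through sampling (hard=True) keeps a discrete position choice at inference time while still letting gradients flow to the position network during training; the same pattern could in principle be applied to prompt length or to per-instance prompt representations.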
