STT: Soft Template Tuning for Few-Shot Adaptation

07/18/2022
by Ping Yu, et al.

Prompt tuning has been an extremely effective tool for adapting a pre-trained model to downstream tasks. However, standard prompt-based methods mainly consider the case where sufficient data are available for each downstream task, and it remains unclear whether their advantage transfers to the few-shot regime, where only limited data are available per task. Although some works have demonstrated the potential of prompt tuning in the few-shot setting, the mainstream approaches of searching for discrete prompts or tuning soft prompts with limited data remain challenging. Through extensive empirical studies, we find that there is still a gap between prompt tuning and full fine-tuning for few-shot learning. To bridge this gap, we propose a new prompt-tuning framework, called Soft Template Tuning (STT). STT combines manual and auto prompts, and treats downstream classification tasks as masked language modeling. Comprehensive evaluation across different settings suggests that STT can close the gap between fine-tuning and prompt-based methods without introducing additional parameters. Notably, it can even outperform the time- and resource-consuming fine-tuning method on sentiment classification tasks.
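The abstract's core recipe (prepend a learnable soft template to a manual template containing a mask slot, then score label words with the pre-trained masked-LM head) can be sketched as follows. This is a minimal illustration of the general idea only, assuming a RoBERTa backbone, a four-token soft template, and a simple sentiment verbalizer; the paper's exact template construction and parameterization (which the abstract says introduces no additional parameters) may differ.

```python
# Minimal sketch of the soft-template idea described in the abstract.
# Assumptions (not from the paper): roberta-base backbone, 4 soft tokens,
# the manual template "It was [MASK].", and the verbalizer below.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "roberta-base"  # assumed backbone for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

n_soft = 4  # assumed number of soft (auto) prompt tokens
embed = model.get_input_embeddings()
# During few-shot training, these would be the trainable template embeddings.
soft_prompt = torch.nn.Parameter(torch.randn(n_soft, embed.embedding_dim) * 0.02)

label_words = {"negative": "bad", "positive": "great"}  # assumed verbalizer
label_ids = [
    tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + w))[0]
    for w in label_words.values()
]

def classify(sentence: str) -> str:
    # Manual template with a mask slot: "<sentence> It was [MASK]."
    text = f"{sentence} It was {tokenizer.mask_token}."
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        tok_embeds = embed(enc["input_ids"])
        # Prepend the soft template embeddings to the token embeddings.
        inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), tok_embeds], dim=1)
        attn = torch.cat(
            [torch.ones(1, n_soft, dtype=enc["attention_mask"].dtype),
             enc["attention_mask"]],
            dim=1,
        )
        out = model(inputs_embeds=inputs_embeds, attention_mask=attn)
    # Locate the mask position, shifted by the soft template length.
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0] + n_soft
    # Classification reduces to scoring label words with the MLM head.
    logits = out.logits[0, mask_pos.item(), label_ids]
    return list(label_words)[logits.argmax().item()]
```

Few-shot training under this sketch would fit only soft_prompt (with the backbone frozen) by minimizing a cross-entropy loss over the label-word logits at the mask position on the limited labeled examples.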
