Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning

10/23/2022
by   Xiangyu Peng, et al.

Prompt tuning approaches, which learn task-specific soft prompts for a downstream task while conditioning on a frozen pre-trained model, have attracted growing interest due to their parameter efficiency. With large language models and sufficient training data, prompt tuning performs comparably to full-model tuning. However, with the limited training samples of few-shot settings, prompt tuning fails to match the performance of full-model fine-tuning. In this work, we focus on improving the few-shot performance of prompt tuning by transferring knowledge from soft prompts of source tasks. Recognizing the good generalization capabilities of ensemble methods in low-data regimes, we first show experimentally that a simple ensemble of model predictions based on different source prompts outperforms existing multi-prompt knowledge transfer approaches such as source prompt fusion in the few-shot setting. Motivated by this observation, we further investigate model ensembles and propose Sample-specific Ensemble of Source Models (SESoM). SESoM learns to adjust the contribution of each source model separately for each target sample when ensembling source model outputs. In this way, SESoM inherits the superior generalization of model ensemble approaches while capturing the sample-specific competence of each source prompt. We conduct experiments across a diverse set of eight NLP tasks using models of different scales (T5-base, -large, and -XL) and find that SESoM consistently outperforms existing methods of the same and even larger parameter scale by a large margin.
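To make the idea of sample-specific ensembling concrete, below is a minimal PyTorch sketch of the general recipe the abstract describes: a small gating network scores each frozen source model per target sample, and the ensemble output is the resulting weighted mixture of the source models' predictions. This is not the paper's implementation; all names (SampleSpecificEnsemble, gate, source_logits, the toy dimensions) are hypothetical illustrations of the idea.

```python
# Sketch of a sample-specific ensemble over K frozen source models.
# Assumption: each source model (conditioned on its own soft prompt) has
# already produced class logits for the target sample; only the gating
# network below is trained on the few-shot target task.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SampleSpecificEnsemble(nn.Module):
    def __init__(self, hidden_dim: int, num_sources: int):
        super().__init__()
        # Gating network: maps a sample representation to one score per source.
        self.gate = nn.Linear(hidden_dim, num_sources)

    def forward(self, sample_repr: torch.Tensor, source_logits: torch.Tensor) -> torch.Tensor:
        """
        sample_repr:   (batch, hidden_dim) encoding of the target sample
        source_logits: (batch, num_sources, num_classes) predictions of the
                       frozen source models
        returns:       (batch, num_classes) ensembled log-probabilities
        """
        # Per-sample weights over the source models.
        weights = F.softmax(self.gate(sample_repr), dim=-1)    # (batch, K)
        # Mix the source predictions with the sample-specific weights.
        probs = F.softmax(source_logits, dim=-1)               # (batch, K, C)
        mixed = torch.einsum("bk,bkc->bc", weights, probs)     # (batch, C)
        return torch.log(mixed + 1e-9)


# Toy usage: 4 source models, 3-way classification, batch of 2 samples.
ensemble = SampleSpecificEnsemble(hidden_dim=16, num_sources=4)
reprs = torch.randn(2, 16)
logits = torch.randn(2, 4, 3)
print(ensemble(reprs, logits).shape)  # torch.Size([2, 3])
```

The key contrast with prompt fusion is that the mixing happens in output space, per sample, rather than by merging the soft prompts themselves before a single forward pass.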

