
Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation

by Xu Guo et al.

Prompt tuning, i.e., conditioning a frozen pretrained language model (PLM) on soft prompts learned from data, has demonstrated impressive performance on a wide range of NLP tasks. However, prompt tuning requires a large training dataset to be effective and is outperformed by finetuning the entire PLM in data-scarce regimes. Previous work <cit.> proposed transferring soft prompts pretrained on the source domain to the target domain. In this paper, we explore domain adaptation for prompt tuning, a problem setting where unlabeled data from the target domain are available during pretraining. We propose bOosting Prompt TunIng with doMain Adaptation (OPTIMA), which regularizes the decision boundary to be smooth around regions where the source and target data distributions are similar. Extensive experiments demonstrate that OPTIMA significantly improves the transferability and sample efficiency of prompt tuning compared with strong baselines. Moreover, in few-shot settings, OPTIMA exceeds full-model tuning by a large margin.
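The core idea, a smoothness regularizer on unlabeled target data combined with supervised prompt tuning on labeled source data, can be sketched in a few lines of PyTorch. This is a minimal illustration only: the SoftPrompt module, the smoothness_loss KL term, the random embedding perturbation, and the HuggingFace-style encoder interface (inputs_embeds, last_hidden_state) are assumptions for exposition; the paper's actual regularizer, which targets regions where the source and target distributions are similar, is not reproduced here.

```python
# Sketch: prompt tuning with a decision-boundary smoothness term on target data.
# All module and function names below are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F


class SoftPrompt(torch.nn.Module):
    """Learnable soft prompt prepended to the frozen PLM's input embeddings."""

    def __init__(self, n_tokens: int, embed_dim: int):
        super().__init__()
        self.embeddings = torch.nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        batch = input_embeds.size(0)
        prompt = self.embeddings.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)


def smoothness_loss(logits_clean: torch.Tensor, logits_perturbed: torch.Tensor) -> torch.Tensor:
    """KL divergence between predictions on clean and perturbed inputs:
    penalizes sharp changes of the decision boundary under small input shifts."""
    log_p = F.log_softmax(logits_perturbed, dim=-1)
    q = F.softmax(logits_clean.detach(), dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean")


def training_step(plm, prompt, classifier, src_embeds, src_labels, tgt_embeds,
                  eps: float = 1e-2, lam: float = 1.0) -> torch.Tensor:
    """One step: supervised loss on labeled source data plus a smoothness term
    on unlabeled target data. `plm` is assumed to be a frozen HuggingFace-style
    encoder accepting `inputs_embeds` and returning `last_hidden_state`."""
    # Supervised loss on the source domain; only the prompt and classifier are trained.
    src_hidden = plm(inputs_embeds=prompt(src_embeds)).last_hidden_state[:, 0]
    sup_loss = F.cross_entropy(classifier(src_hidden), src_labels)

    # Smoothness on the target domain: predictions should not flip under a small
    # perturbation of the input embeddings (a random perturbation is used here
    # purely as the simplest stand-in).
    tgt_hidden = plm(inputs_embeds=prompt(tgt_embeds)).last_hidden_state[:, 0]
    tgt_logits = classifier(tgt_hidden)
    noise = eps * torch.randn_like(tgt_embeds)
    tgt_hidden_pert = plm(inputs_embeds=prompt(tgt_embeds + noise)).last_hidden_state[:, 0]
    reg_loss = smoothness_loss(tgt_logits, classifier(tgt_hidden_pert))

    return sup_loss + lam * reg_loss
```

The random-noise consistency term above is only the simplest instance of decision-boundary smoothing; per the abstract, OPTIMA concentrates this regularization on regions where the source and target data distributions are similar.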




Related research:

Divergence Optimization for Noisy Universal Domain Adaptation

Source-Free Domain Adaptation for Question Answering with Masked Self-training

PERL: Pivot-based Domain Adaptation for Pre-trained Deep Contextualized Embedding Models

Multiple-Source Domain Adaptation via Coordinated Domain Encoders and Paired Classifiers

A Theory of Label Propagation for Subpopulation Shift

Unsupervised Finetuning