PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer

05/03/2023
by Lichang Chen, et al.

Recent studies show that prompt tuning can leverage the power of large language models better than fine-tuning on downstream natural language understanding tasks. However, existing prompt tuning methods suffer from training instability: the variance of scores across different random seeds is quite large. To address this critical problem, we first visualize the loss landscape of vanilla prompt tuning and find that it is precipitous, so a slight change in the input data causes a large fluctuation in the loss. This is an essential factor behind the instability of prompt tuning. Based on this observation, we introduce perturbation-based regularizers, which smooth the loss landscape, into prompt tuning. We propose a new algorithm, Prompt Tuning with Perturbation-based regularizer (PTP), which not only alleviates training instability dramatically but also boosts the performance of prompt tuning. We design two kinds of perturbation-based regularizers, random-noise-based and adversarial-based, and the proposed perturbations can be applied in both the text space and the embedding space. Extensive experiments show the effectiveness of our methods in stabilizing training. Our new algorithms improve state-of-the-art prompt tuning methods by 1.94% and 2.34% on the SuperGLUE and FewGLUE benchmarks, respectively.
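The core idea, perturbing the input in embedding space and penalizing the loss under that perturbation so training settles into a flatter region, can be illustrated with a short PyTorch sketch. This is a minimal illustration rather than the paper's implementation: ToyModel, ptp_style_loss, and the eps scale are hypothetical placeholders, and the adversarial variant is simplified to a single FGSM-style step.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for the model: in the paper's setting a frozen LM is
# steered by trainable soft-prompt embeddings; here a toy classifier over
# input embeddings keeps the sketch self-contained and runnable.
class ToyModel(torch.nn.Module):
    def __init__(self, dim=32, num_classes=2):
        super().__init__()
        self.head = torch.nn.Linear(dim, num_classes)

    def forward(self, embeds):                 # embeds: (batch, seq, dim)
        return self.head(embeds.mean(dim=1))   # mean-pool, then classify


def ptp_style_loss(model, embeds, labels, mode="random", eps=1e-2):
    """Task loss plus a perturbation-based regularizer in embedding space.

    mode="random":      Gaussian noise of scale eps (random-noise variant).
    mode="adversarial": one-step, FGSM-style perturbation of size eps
                        (a simplification of the adversarial variant).
    """
    clean_loss = F.cross_entropy(model(embeds), labels)

    if mode == "random":
        delta = eps * torch.randn_like(embeds)
    else:
        # Ascend the loss w.r.t. the embeddings to find a worst-case
        # perturbation within an eps-ball (sign of the gradient).
        embeds_adv = embeds.detach().requires_grad_(True)
        adv_loss = F.cross_entropy(model(embeds_adv), labels)
        (grad,) = torch.autograd.grad(adv_loss, embeds_adv)
        delta = eps * grad.sign()

    # Penalizing the loss under perturbation encourages a smooth, flat
    # neighborhood around the current solution.
    perturbed_loss = F.cross_entropy(model(embeds + delta), labels)
    return clean_loss + perturbed_loss


model = ToyModel()
embeds = torch.randn(8, 16, 32)            # batch of 8, seq len 16, dim 32
labels = torch.randint(0, 2, (8,))
loss = ptp_style_loss(model, embeds, labels, mode="adversarial")
loss.backward()                            # gradients reach trainable params
```

A smaller eps trades regularization strength for fidelity to the clean objective; the random-noise variant is cheaper per step, while the adversarial variant targets the steepest direction of the local loss surface.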

