P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks

10/14/2021
by Xiao Liu, et al.

Prompt tuning, which tunes only continuous prompts while keeping the language model frozen, substantially reduces per-task storage and memory usage during training. However, in the context of NLU, prior work reveals that prompt tuning does not perform well for normal-sized pre-trained models. We also find that existing prompt tuning methods cannot handle hard sequence tagging tasks, indicating a lack of universality. We present a novel empirical finding: properly optimized prompt tuning can be universally effective across a wide range of model scales and NLU tasks, matching the performance of fine-tuning while tuning only 0.1%-3% of the parameters. Our method, P-Tuning v2, is not conceptually new; it is a version of prefix-tuning (Li and Liang, 2021) optimized and adapted for NLU. Given its universality and simplicity, we believe P-Tuning v2 can serve as an alternative to fine-tuning and a strong baseline for future research.
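The core mechanism behind prefix-tuning and P-Tuning v2 is deep prompt tuning: trainable continuous prompts are injected as extra key/value vectors at every transformer layer, not only at the input, while the backbone stays frozen. The sketch below is illustrative plain PyTorch, not the authors' code; the names (DeepPromptEncoder, mark_trainable), the initialization, and the tensor layout are assumptions, and the optional MLP reparameterization discussed in the paper is omitted.

```python
# A minimal sketch of deep prompt tuning in the spirit of prefix-tuning /
# P-Tuning v2. All names here are illustrative, not the authors' code.
import torch
import torch.nn as nn


class DeepPromptEncoder(nn.Module):
    """Trainable prefix key/value vectors for every transformer layer."""

    def __init__(self, num_layers: int, num_heads: int, head_dim: int,
                 prefix_len: int):
        super().__init__()
        self.prefix_len = prefix_len
        # One key and one value vector per prefix position and per layer.
        self.prefix = nn.Parameter(
            0.02 * torch.randn(prefix_len, num_layers, 2, num_heads, head_dim)
        )

    def forward(self, batch_size: int):
        # Rearrange into the usual past_key_values layout: one (key, value)
        # pair per layer, each of shape [batch, heads, prefix_len, head_dim].
        p = self.prefix.permute(1, 2, 3, 0, 4)            # [L, 2, H, P, D]
        p = p.unsqueeze(2).expand(-1, -1, batch_size, -1, -1, -1)
        return [(layer[0], layer[1]) for layer in p]


def mark_trainable(backbone: nn.Module, prompts: nn.Module) -> None:
    """Freeze the backbone so only the prefix parameters receive gradients."""
    for param in backbone.parameters():
        param.requires_grad = False
    tuned = sum(p.numel() for p in prompts.parameters())
    total = tuned + sum(p.numel() for p in backbone.parameters())
    print(f"tuned parameters: {100 * tuned / total:.3f}%")
```

The per-layer (key, value) pairs would then be prepended inside each attention layer (for HuggingFace-style causal LMs, e.g. via past_key_values, with the attention mask extended to cover the prefix positions). As a rough sanity check of the parameter budget: for a BERT-base-scale backbone (about 110M parameters, 12 layers, 12 heads, head dimension 64), a prefix of length 20 adds 12 x 2 x 20 x 768 ≈ 0.37M parameters, roughly 0.3% of the model, consistent with the 0.1%-3% range quoted in the abstract.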


