
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks

10/14/2021
by Xiao Liu, et al., Tsinghua University

Prompt tuning, which tunes only continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage during training. However, in the context of NLU, prior work shows that prompt tuning does not perform well for normal-sized pre-trained models. We also find that existing prompt tuning methods cannot handle hard sequence tagging tasks, indicating a lack of universality. We present a novel empirical finding that properly optimized prompt tuning can be universally effective across a wide range of model scales and NLU tasks. It matches the performance of fine-tuning while tuning only 0.1%-3% of the parameters. Our method, P-Tuning v2, is not a new technique but a version of prefix-tuning (Li and Liang, 2021) optimized and adapted for NLU. Given its universality and simplicity, we believe P-Tuning v2 can serve as an alternative to fine-tuning and a strong baseline for future research.
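For intuition, the core mechanism can be sketched in a few lines of PyTorch: trainable prefix (key/value) vectors are attached to every layer of the backbone while the pre-trained weights stay frozen. The sketch below is a hypothetical illustration under assumed names and sizes (PrefixEncoder, a 20-token prefix, a 12-layer stand-in encoder), not the authors' released code; it only constructs the prefixes and counts parameters rather than wiring them into attention.

```python
# Minimal, illustrative sketch (plain PyTorch) of the deep prompt tuning idea:
# trainable prefix vectors for every layer of a frozen backbone, so only a tiny
# fraction of parameters is updated. Names below (PrefixEncoder, prefix_len,
# the stand-in backbone) are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn


class PrefixEncoder(nn.Module):
    """Holds trainable per-layer prefix key/value vectors (the continuous prompts)."""

    def __init__(self, prefix_len: int, num_layers: int, hidden_size: int):
        super().__init__()
        self.prefix_len = prefix_len
        self.num_layers = num_layers
        self.hidden_size = hidden_size
        # One trainable vector per prefix position, per layer, for key and value.
        self.prefix = nn.Parameter(
            torch.randn(prefix_len, num_layers * 2 * hidden_size)
        )

    def forward(self, batch_size: int) -> torch.Tensor:
        # Shape: (batch, prefix_len, num_layers, 2, hidden_size). In a full
        # implementation these would be fed to each attention layer as extra
        # key/value states; here we only build them.
        kv = self.prefix.view(self.prefix_len, self.num_layers, 2, self.hidden_size)
        return kv.unsqueeze(0).expand(batch_size, -1, -1, -1, -1)


# Stand-in for a frozen pre-trained LM (a real run would load BERT, GLM, etc.).
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=12,
)
for p in backbone.parameters():
    p.requires_grad = False  # the language model itself is never updated

prefix_encoder = PrefixEncoder(prefix_len=20, num_layers=12, hidden_size=768)

tuned = sum(p.numel() for p in prefix_encoder.parameters())
total = tuned + sum(p.numel() for p in backbone.parameters())
print(f"tuned parameters: {tuned:,} of {total:,} ({100 * tuned / total:.2f}%)")
```

With these toy sizes the prefix parameters come out to well under 1% of the total, the same ballpark as the 0.1%-3% figure quoted above; the actual method injects the prefixes into every attention layer and adds a small task-specific head on top of the frozen model.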


Related Research

03/23/2022 · Visual Prompt Tuning
The current modus operandi in adapting pre-trained models involves updat...

08/06/2020 · Better Fine-Tuning by Reducing Representational Collapse
Although widely adopted, existing approaches for fine-tuning pre-trained...

10/11/2022 · A Kernel-Based View of Language Model Fine-Tuning
It has become standard to solve NLP tasks by fine-tuning pre-trained lan...

05/23/2022 · When does Parameter-Efficient Transfer Learning Work for Machine Translation?
Parameter-efficient fine-tuning methods (PEFTs) offer the promise of ada...

05/14/2018 · Parser Training with Heterogeneous Treebanks
How to make the most of multiple heterogeneous treebanks when training a...

03/18/2021 · GPT Understands, Too
While GPTs with traditional fine-tuning fail to achieve strong results o...

12/18/2021 · Improving Learning-to-Defer Algorithms Through Fine-Tuning
The ubiquity of AI leads to situations where humans and AI work together...

Code Repositories

P-tuning

A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".

P-tuning-v2

An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks

