Parameter-Efficient Tuning on Layer Normalization for Pre-trained Language Models

11/16/2022
by Wang Qi, et al.

Conventional fine-tuning becomes increasingly difficult as Pre-trained Language Models (PLMs) grow in size, which has made parameter-efficient tuning a focal point of frontier research. Previous methods in this field add tunable adapters to the multi-head attention (MHA) and/or feed-forward network (FFN) of Transformer blocks to give PLMs transferability. However, the power of layer normalization, an essential component of the Transformer architecture, has been ignored for parameter-efficient tuning. In this paper, we first propose LN-tuning, which tunes only the gain and bias terms of the Layer Normalization modules, using just 0.03% of the model's parameters. LN-tuning is highly time-efficient and significantly outperforms baselines with fewer than 0.1% tunable parameters. We further study unified frameworks that combine LN-tuning with previous methods and find that: (1) the unified framework combining prefix-tuning, an adapter-based method operating on MHA, and LN-tuning achieves SOTA performance; (2) unified frameworks that tune MHA and LayerNorm simultaneously improve performance, whereas those that tune FFN and LayerNorm simultaneously degrade it. An ablation study validates that LN-tuning contains no redundant parameters and gives a further understanding of the method.
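In practice, LN-tuning amounts to freezing the entire PLM and re-enabling gradients only on the gain (weight) and bias of each LayerNorm module. The sketch below illustrates that recipe in PyTorch; the Hugging Face checkpoint name and the parameter-counting step are illustrative assumptions, not the authors' released code.

```python
import torch.nn as nn
from transformers import AutoModel

# Illustrative checkpoint; any Transformer PLM built on nn.LayerNorm works.
model = AutoModel.from_pretrained("bert-base-uncased")

# Freeze every parameter of the pre-trained backbone.
for param in model.parameters():
    param.requires_grad = False

# LN-tuning: unfreeze only the gain (weight) and bias of each LayerNorm.
for module in model.modules():
    if isinstance(module, nn.LayerNorm):
        for param in module.parameters():
            param.requires_grad = True

# For BERT-scale models the tunable fraction lands near the paper's 0.03%.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"tunable parameters: {trainable:,} / {total:,} ({100 * trainable / total:.3f}%)")
```

A downstream task head (e.g., a classification layer) would typically stay trainable on top of this; only the backbone follows the LN-tuning recipe.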
