Better Fine-Tuning by Reducing Representational Collapse

08/06/2020
by Armen Aghajanyan, et al.

Although widely adopted, existing approaches for fine-tuning pre-trained language models have been shown to be unstable across hyper-parameter settings, motivating recent work on trust region methods. In this paper, we present a simplified and efficient method rooted in trust region theory that replaces previously used adversarial objectives with parametric noise (sampling from either a normal or uniform distribution), thereby discouraging representation change during fine-tuning, when possible, without hurting performance. We also introduce a new analysis to motivate the use of trust region methods more generally, by studying representational collapse: the degradation of generalizable representations from pre-trained models as they are fine-tuned for a specific end task. Extensive experiments show that our fine-tuning method matches or exceeds the performance of previous trust region methods on a range of understanding and generation tasks (including DailyMail/CNN, Gigaword, Reddit TIFU, and the GLUE benchmark), while also being much faster. We also show that it is less prone to representational collapse: the pre-trained models maintain more generalizable representations every time they are fine-tuned.
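The abstract describes the core idea only at a high level: rather than computing an adversarial perturbation, sample parametric noise (normal or uniform) around the input representations and penalize how far the model's output distribution moves under that noise. The snippet below is a minimal illustrative sketch of that kind of noise-based consistency regularizer, not the authors' implementation; the function names, the symmetric-KL choice of divergence, and the hyper-parameter values (`sigma`, `lam`) are assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): perturb input embeddings with
# parametric noise and penalize divergence between the model's predictions
# on the clean and noised inputs, alongside the usual task loss.
import torch
import torch.nn.functional as F


def symmetric_kl(p_logits, q_logits):
    """Symmetric KL divergence between two categorical distributions given as logits."""
    p_log = F.log_softmax(p_logits, dim=-1)
    q_log = F.log_softmax(q_logits, dim=-1)
    kl_pq = F.kl_div(q_log, p_log, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(p_log, q_log, log_target=True, reduction="batchmean")
    return kl_pq + kl_qp


def noisy_consistency_loss(model, embeddings, labels, sigma=1e-5, lam=1.0,
                           noise="normal"):
    """Task loss plus a consistency penalty under embedding-space noise.

    `model` is assumed to map input embeddings to classification logits;
    the normal/uniform noise families mirror the abstract, while the
    specific hyper-parameter values here are only placeholders.
    """
    clean_logits = model(embeddings)

    if noise == "normal":
        z = torch.randn_like(embeddings) * sigma
    else:  # uniform noise in [-sigma, sigma]
        z = (torch.rand_like(embeddings) * 2 - 1) * sigma
    noisy_logits = model(embeddings + z)

    task_loss = F.cross_entropy(clean_logits, labels)
    consistency = symmetric_kl(clean_logits, noisy_logits)
    return task_loss + lam * consistency
```

In this sketch the consistency term discourages the fine-tuned representations from drifting under small perturbations, which is the trust-region-style effect the abstract attributes to the method; the exact divergence and noise scale used in the paper should be taken from the full text.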



Related research

05/23/2022
Improving language models fine-tuning with representation consistency targets
Fine-tuning contextualized representations learned by pre-trained langua...

02/09/2023
Knowledge is a Region in Weight Space for Fine-tuned Language Models
Research on neural networks has largely focused on understanding a singl...

05/11/2021
Scene Understanding for Autonomous Driving
To detect and segment objects in images based on their content is one of...

08/31/2021
Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning
Pre-trained models have been widely applied and recently proved vulnerab...

10/14/2021
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Prompt tuning, which only tunes continuous prompts with a frozen languag...

07/31/2023
Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy
Current state-of-the-art results in computer vision depend in part on fi...

06/06/2023
Training and Fine-Tuning Large Language Models with Turkish Datasets (original title: Büyük dil modellerinin Türkçe verisetleri ile eğitilmesi ve ince ayarlanması)
Large language models have advanced enormously, gained vast attraction a...
