Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning

12/22/2020
by Armen Aghajanyan, et al.

Although pretrained language models can be fine-tuned to produce state-of-the-art results for a very wide range of language understanding tasks, the dynamics of this process are not well understood, especially in the low-data regime. Why can we use relatively vanilla gradient descent algorithms (e.g., without strong regularization) to tune a model with hundreds of millions of parameters on datasets with only hundreds or thousands of labeled examples? In this paper, we argue that analyzing fine-tuning through the lens of intrinsic dimension provides us with empirical and theoretical intuitions to explain this remarkable phenomenon. We empirically show that common pre-trained models have a very low intrinsic dimension; in other words, there exists a low-dimensional reparameterization that is as effective for fine-tuning as the full parameter space. For example, by optimizing only 200 trainable parameters randomly projected back into the full space, we can tune a RoBERTa model to achieve 90% of the full-parameter performance levels on MRPC. Furthermore, we empirically show that pre-training implicitly minimizes intrinsic dimension and, perhaps surprisingly, larger models tend to have lower intrinsic dimension after a fixed number of pre-training updates, at least in part explaining their extreme effectiveness. Lastly, we connect intrinsic dimensionality with low-dimensional task representations and compression-based generalization bounds to provide intrinsic-dimension-based generalization bounds that are independent of the full parameter count.
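
To make the reparameterization concrete, below is a minimal PyTorch sketch of the random-subspace trick the measurements build on (following Li et al., 2018): the full weights are written as theta = theta_0 + P theta_d, where theta_0 is the frozen pretrained model, P is a fixed random projection, and only the d-dimensional vector theta_d is trained. The toy two-layer model, the choice d = 50, and all variable names are illustrative assumptions, not the authors' code, which applies the same idea to RoBERTa.

```python
# Sketch of random-subspace fine-tuning: train only a d-dimensional vector
# theta_d, mapped into the full parameter space by a fixed random projection.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Frozen "pretrained" weights theta_0 (a toy 2-layer net stands in for a PLM).
base = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
theta_0 = torch.cat([p.detach().flatten() for p in base.parameters()])
D = theta_0.numel()   # full parameter count
d = 50                # subspace dimension being probed (hypothetical choice)

# Fixed random projection P (D x d, roughly unit-norm columns) and the only
# trainable tensor theta_d; effective parameters are theta_0 + P @ theta_d.
P = torch.randn(D, d) / D ** 0.5
theta_d = torch.zeros(d, requires_grad=True)

def forward(x):
    """Run the base model with parameters theta_0 + P @ theta_d."""
    flat = theta_0 + P @ theta_d
    params, i = {}, 0
    for name, p in base.named_parameters():
        params[name] = flat[i:i + p.numel()].view_as(p)
        i += p.numel()
    return torch.func.functional_call(base, params, (x,))

# Toy training loop: gradients reach only the d-dimensional theta_d.
opt = torch.optim.Adam([theta_d], lr=1e-2)
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
for _ in range(200):
    loss = nn.functional.cross_entropy(forward(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"trained {d} of {D} parameters, final loss {loss.item():.3f}")
```

At RoBERTa scale a dense D x d projection matrix is not feasible to store, so the paper follows Li et al. in using memory-efficient structured projections (the Fastfood transform) and also proposes a structure-aware variant (SAID); the dense P above is only practical for toy models.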
