Scaling Laws for Transfer

02/02/2021
by Danny Hernandez, et al.

We study empirical scaling laws for transfer learning between distributions in an unsupervised, fine-tuning setting. When we train increasingly large neural networks from scratch on a fixed-size dataset, they eventually become data-limited and stop improving in performance (cross-entropy loss). When we do the same for models pre-trained on a large language dataset, the slope in performance gains is merely reduced rather than going to zero. We calculate the effective data "transferred" from pre-training by determining how much data a transformer of the same size would have required to achieve the same loss when training from scratch. In other words, we focus on units of data while holding everything else fixed. We find that the effective data transferred is described well in the low-data regime by a power law of parameter count and fine-tuning dataset size. We believe the exponents in these power laws correspond to measures of the generality of a model and proximity of distributions (in a directed rather than symmetric sense). We find that pre-training effectively multiplies the fine-tuning dataset size. Transfer, like overall performance, scales predictably in terms of parameters, data, and compute.
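A minimal sketch of the relation the abstract describes: effective data transferred D_T modeled as a power law in the fine-tuning dataset size D_F and the parameter count N, i.e. D_T = k · D_F^α · N^β. The function name and the constants k, alpha, and beta below are illustrative placeholders, not values taken from the paper; the paper fits such constants separately for each pair of distributions.

```python
def effective_data_transferred(fine_tune_tokens: float,
                               non_embedding_params: float,
                               k: float = 1.0e4,
                               alpha: float = 0.2,
                               beta: float = 0.4) -> float:
    """Estimate D_T, the extra from-scratch data that pre-training is 'worth'.

    fine_tune_tokens      -- D_F, size of the fine-tuning dataset (tokens)
    non_embedding_params  -- N, model size in non-embedding parameters
    k, alpha, beta        -- fitted constants (placeholder values here); per the
                             abstract, alpha reflects proximity of the two
                             distributions and beta the generality of the model
    """
    return k * fine_tune_tokens ** alpha * non_embedding_params ** beta


# Example: a 40M-parameter model fine-tuned on 1M tokens. The total effective
# dataset is roughly D_F + D_T, so pre-training acts as a multiplier on the
# fine-tuning data, as the abstract notes.
d_t = effective_data_transferred(fine_tune_tokens=1e6, non_embedding_params=4e7)
print(f"effective data transferred: {d_t:.3e} tokens")
```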
