Scaling Laws for Transfer

02/02/2021
by Danny Hernandez, et al.

We study empirical scaling laws for transfer learning between distributions in an unsupervised, fine-tuning setting. When we train increasingly large neural networks from scratch on a fixed-size dataset, they eventually become data-limited and stop improving in performance (cross-entropy loss). When we do the same for models pre-trained on a large language dataset, the slope of the performance gains is merely reduced rather than going to zero. We calculate the effective data "transferred" from pre-training by determining how much data a transformer of the same size would have required to achieve the same loss when training from scratch. In other words, we focus on units of data while holding everything else fixed. We find that the effective data transferred is described well in the low-data regime by a power law of parameter count and fine-tuning dataset size. We believe the exponents in these power laws correspond to measures of the generality of a model and the proximity of the two distributions (in a directed rather than symmetric sense). We find that pre-training effectively multiplies the fine-tuning dataset size. Transfer, like overall performance, scales predictably in terms of parameters, data, and compute.
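
To make the power-law claim concrete, the following is a minimal Python sketch of the functional form the abstract describes, D_T = k * D_F^alpha * N^beta, where D_T is the effective data transferred, D_F the fine-tuning dataset size, and N the parameter count. The constants k, alpha, and beta below are illustrative placeholders, not fitted values from the paper.

# Minimal sketch (not the paper's code) of the power-law form described above:
# effective data transferred D_T = k * D_F**alpha * N**beta.
# k, alpha, and beta are illustrative placeholder constants, not fitted values.

def effective_data_transferred(n_params, d_finetune, k=1e4, alpha=0.2, beta=0.4):
    """Effective pre-training data 'transferred' (in tokens) under the assumed power law."""
    return k * (d_finetune ** alpha) * (n_params ** beta)

# "Pre-training effectively multiplies the fine-tuning dataset size":
# the total effective dataset is D_F + D_T.
d_f = 1_000_000        # hypothetical fine-tuning tokens
n = 100_000_000        # hypothetical non-embedding parameter count
d_t = effective_data_transferred(n, d_f)
print(f"effective dataset multiplier: {(d_f + d_t) / d_f:.1f}x")

Because alpha is less than one in this form, the multiplier (D_F + D_T) / D_F grows as D_F shrinks, which is the sense in which transfer matters most in the low-data regime.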
