Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling

04/03/2023
by   Stella Biderman, et al.

How do large language models (LLMs) develop and evolve over the course of training? How do these patterns change as models scale? To answer these questions, we introduce Pythia, a suite of 16 LLMs all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. We provide public access to 154 checkpoints for each of the 16 models, alongside tools to download and reconstruct their exact training dataloaders for further study. We intend Pythia to facilitate research in many areas, and we present several case studies, including novel results on memorization, term-frequency effects on few-shot performance, and reducing gender bias. We demonstrate that this highly controlled setup can be used to yield novel insights into LLMs and their training dynamics. Trained models, analysis code, training code, and training data can be found at https://github.com/EleutherAI/pythia.
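As a concrete illustration of the released checkpoints, here is a minimal sketch of loading one intermediate training snapshot with the Hugging Face `transformers` library. It assumes the models are hosted on the Hugging Face Hub under names like `EleutherAI/pythia-70m`, with each of the 154 checkpoints exposed as a revision branch such as `step3000`; adjust the model name and step if the hosting layout differs.

```python
# Sketch: load an intermediate Pythia checkpoint and sample from it.
# Assumes Hub model id "EleutherAI/pythia-70m" and revision branches
# named "step<N>" for the saved training checkpoints.
from transformers import AutoTokenizer, GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m",
    revision="step3000",  # one of the saved training checkpoints
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m",
    revision="step3000",
)

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(tokens[0]))
```

Because every checkpoint of every model saw the same data in the same order, swapping the `revision` string (or the model size in the name) lets the same analysis script be replayed across training time and across scale.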


