Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer

03/07/2022
by Greg Yang, et al.

Hyperparameter (HP) tuning in deep learning is an expensive process, prohibitively so for neural networks (NNs) with billions of parameters. We show that, in the recently discovered Maximal Update Parametrization (muP), many optimal HPs remain stable even as model size changes. This leads to a new HP tuning paradigm we call muTransfer: parametrize the target model in muP, tune the HPs indirectly on a smaller model, and zero-shot transfer them to the full-sized model, i.e., without directly tuning the latter at all. We verify muTransfer on Transformer and ResNet. For example, 1) by transferring pretraining HPs from a model of 13M parameters, we outperform published numbers of BERT-large (350M parameters), with a total tuning cost equivalent to pretraining BERT-large once; 2) by transferring from 40M parameters, we outperform published numbers of the 6.7B GPT-3 model, with tuning cost only 7% of total pretraining cost. A PyTorch implementation of our technique can be found at github.com/microsoft/mup and installed via `pip install mup`.
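For illustration, below is a minimal sketch of the muTransfer workflow using the `mup` package: tune HPs on a narrow proxy model parametrized in muP, then instantiate the wide target model with the same base shapes and reuse the tuned HPs. The toy MLP, the widths, and the learning rate are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch of the muTransfer workflow with the `mup` package
# (github.com/microsoft/mup). The toy MLP, the widths, and the learning
# rate below are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
from mup import MuReadout, MuAdam, set_base_shapes

class MLP(nn.Module):
    def __init__(self, width, d_in=32, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, width)
        self.fc2 = nn.Linear(width, width)
        # muP requires the output ("readout") layer to be a MuReadout.
        self.readout = MuReadout(width, d_out)

    def forward(self, x):
        return self.readout(torch.relu(self.fc2(torch.relu(self.fc1(x)))))

# Base and delta models tell mup which dimensions scale with "width".
base_model = MLP(width=64)
delta_model = MLP(width=128)

# Small proxy model: run the HP search (e.g., over learning rate) cheaply here.
proxy_model = MLP(width=256)
set_base_shapes(proxy_model, base_model, delta=delta_model)

# Full-sized target model: reuse the proxy's tuned HPs zero-shot.
target_model = MLP(width=4096)
set_base_shapes(target_model, base_model, delta=delta_model)

# muP-aware optimizer; under muP the same lr transfers across widths.
optimizer = MuAdam(target_model.parameters(), lr=1e-2)
```

In practice, one would sweep HPs while training proxy_model and then construct and train target_model once with exactly the HPs found.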

Related research

10/19/2022 | Continued Pretraining for Better Zero- and Few-Shot Promptability
Recently introduced language model prompting methods can achieve high ac...

04/15/2021 | How to Train BERT with an Academic Budget
While large language models à la BERT are used ubiquitously in NLP, pret...

07/27/2020 | Practical and sample efficient zero-shot HPO
Zero-shot hyperparameter optimization (HPO) is a simple yet effective us...

07/20/2022 | Pretraining a Neural Network before Knowing Its Architecture
Training large neural networks is possible by training a smaller hyperne...

05/21/2023 | Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers
This paper explores the effectiveness of model-generated signals in impr...

06/16/2023 | Neural Priming for Sample-Efficient Adaptation
We propose Neural Priming, a technique for adapting large pretrained mod...

10/20/2022 | Weighted Maximum Likelihood for Controller Tuning
Recently, Model Predictive Contouring Control (MPCC) has arisen as the s...
