Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity

01/11/2021
by William Fedus, et al.

In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely activated model with an outrageous number of parameters but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs, and training instability; we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive, improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities, and we show that large sparse models may be trained, for the first time, in lower-precision (bfloat16) formats. We design models based on T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings, where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training models with up to a trillion parameters on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model.
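To make the routing simplification concrete: a Switch layer sends each token to a single expert (top-1), chosen by a softmax over a learned router, rather than to the top-k experts of earlier MoE layers, and a capacity factor caps how many tokens any one expert can receive. The NumPy sketch below illustrates that routing step under these assumptions; the function and variable names (switch_route, capacity_factor, and so on) are illustrative and not taken from the paper's reference implementation.

```python
import numpy as np

def switch_route(tokens, router_weights, num_experts, capacity_factor=1.0):
    """Illustrative top-1 (Switch-style) routing over a batch of tokens.

    tokens:         [num_tokens, d_model] token representations
    router_weights: [d_model, num_experts] learned router projection
    Returns each token's chosen expert, its router probability (gate), and a
    mask marking the tokens kept within expert capacity.
    """
    num_tokens = tokens.shape[0]
    # Each expert handles at most `capacity` tokens per batch.
    capacity = int(capacity_factor * num_tokens / num_experts)

    # Router logits -> softmax probabilities; each token picks one expert.
    logits = tokens @ router_weights
    logits -= logits.max(axis=-1, keepdims=True)        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    expert_index = probs.argmax(axis=-1)
    gate = probs[np.arange(num_tokens), expert_index]   # scales expert output

    # Enforce capacity: overflowing tokens are dropped by the expert layer
    # (in the full model they pass through the residual connection unchanged).
    kept = np.zeros(num_tokens, dtype=bool)
    load = np.zeros(num_experts, dtype=int)
    for t in range(num_tokens):
        e = expert_index[t]
        if load[e] < capacity:
            load[e] += 1
            kept[t] = True
    return expert_index, gate, kept
```

In the full model, the chosen expert's output is multiplied by its router probability (the gate value above), and an auxiliary load-balancing loss encourages tokens to spread evenly across experts; that loss is omitted here for brevity.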

Related research

12/10/2022 · SMILE: Scaling Mixture-of-Experts with Efficient Bi-level Routing
The mixture of Expert (MoE) parallelism is a recent advancement that sca...

12/09/2022 · Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints
Training large, deep neural networks to convergence can be prohibitively...

10/08/2021 · Taming Sparsely Activated Transformer with Stochastic Experts
Sparsely activated models (SAMs), such as Mixture-of-Experts (MoE), can ...

10/14/2021 · Towards More Effective and Economic Sparsely-Activated Model
The sparsely-activated models have achieved great success in natural lan...

07/14/2022 · BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling
The pre-training of large language models usually requires massive amoun...

09/04/2022 · A Review of Sparse Expert Models in Deep Learning
Sparse expert models are a thirty-year old concept re-emerging as a popu...

08/13/2021 · Towards Structured Dynamic Sparse Pre-Training of BERT
Identifying algorithms for computational efficient unsupervised training...
