Exploring Sparse Expert Models and Beyond

05/31/2021
by An Yang, et al.

Mixture-of-Experts (MoE) models can achieve promising results with an outrageously large number of parameters at constant computational cost, and thus have become a trend in model scaling. Still, it remains unclear how MoE layers bring quality gains by leveraging sparsely activated parameters. In this work, we investigate several key factors in sparse expert models. We observe that, contrary to the perspectives of recent studies, load imbalance may not be a significant problem affecting model quality, whereas the number of sparsely activated experts k and the expert capacity C in top-k routing can make a significant difference. Furthermore, we take a step forward and propose a simple method called expert prototyping, which splits the experts into different prototypes and applies k top-1 routing. This strategy improves model quality while maintaining constant computational cost, and our further exploration of extremely large-scale models shows that it is more effective for training larger models. We push the model scale to over 1 trillion parameters and implement it on only 480 NVIDIA V100-32GB GPUs, compared with recent state-of-the-art models trained on 2048 TPU cores. The proposed giant model achieves a substantial speedup in convergence over a same-size baseline.
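The abstract describes expert prototyping only at a high level. The following is a minimal sketch of the idea as stated there: split the experts into k prototypes, route each token top-1 within every prototype, and sum the k selected expert outputs, so the compute matches top-k routing while each router only performs a top-1 selection. The class and parameter names (PrototypedMoE, num_prototypes) are illustrative assumptions, not the authors' implementation, and this toy version omits expert capacity limits, auxiliary losses, and distributed dispatch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypedMoE(nn.Module):
    """Toy expert-prototyping layer: k prototypes, top-1 routing in each."""

    def __init__(self, d_model, d_ff, num_experts, num_prototypes):
        super().__init__()
        assert num_experts % num_prototypes == 0
        self.k = num_prototypes
        experts_per_proto = num_experts // num_prototypes
        # One router per prototype, each scoring only its own experts.
        self.routers = nn.ModuleList(
            nn.Linear(d_model, experts_per_proto) for _ in range(self.k))
        # Each prototype holds an independent group of feed-forward experts.
        self.experts = nn.ModuleList(
            nn.ModuleList(
                nn.Sequential(
                    nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
                for _ in range(experts_per_proto))
            for _ in range(self.k))

    def forward(self, x):
        # x: (num_tokens, d_model); output is the sum of k top-1 expert outputs.
        out = torch.zeros_like(x)
        for router, experts in zip(self.routers, self.experts):
            gates = F.softmax(router(x), dim=-1)       # (tokens, experts_per_proto)
            top_gate, top_idx = gates.max(dim=-1)      # top-1 within this prototype
            for e, expert in enumerate(experts):
                mask = top_idx == e                    # tokens routed to expert e
                if mask.any():
                    out[mask] += top_gate[mask].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: 32 experts split into 4 prototypes; each token still activates 4 experts,
# so the cost is comparable to top-4 routing over a single pool of 32 experts.
moe = PrototypedMoE(d_model=64, d_ff=256, num_experts=32, num_prototypes=4)
y = moe(torch.randn(10, 64))
print(y.shape)  # torch.Size([10, 64])
```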


