Optimizing Mixture of Experts using Dynamic Recompilations

05/04/2022
by Ferdinand Kossmann, et al.

The Mixture of Experts architecture allows for outrageously large neural networks by scaling model parameter size independently from computational demand (FLOPs). However, current DNN frameworks cannot effectively support the dynamic data flow in Mixture of Experts, and implementations on top of these frameworks need to use workarounds that introduce significant overheads. To address the limitations of these frameworks, we present DynaMoE, a DNN library that uses dynamic recompilations to optimize and adapt the use of computational resources to the dynamic needs of Mixture of Experts models. Our evaluation shows that DynaMoE achieves a 1.8x speedup and supports 2.3x larger model sizes when compared to existing MoE systems, even when not using recompilations. We then present further optimizations enabled by dynamic recompilations that yield an additional 1.7x speedup while simultaneously reducing memory pressure and improving model quality.
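To make the core idea concrete, the sketch below is a minimal, illustrative Mixture-of-Experts layer with top-1 gating in PyTorch; it is not DynaMoE's API, and the class and parameter names (e.g. TopOneMoE, d_hidden) are hypothetical. It shows how parameter count grows with the number of experts while per-token FLOPs stay roughly constant, and why the data-dependent per-expert batch sizes create the dynamic data flow that static-graph frameworks handle poorly.

```python
# Illustrative sketch only (not DynaMoE's API): a top-1 gated MoE layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopOneMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)  # router
        # Adding experts grows parameters, but each token only uses one expert.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)   # (num_tokens, num_experts)
        top_prob, top_idx = scores.max(dim=-1)     # top-1 routing decision
        out = torch.zeros_like(x)
        # Per-expert batch sizes depend on the routing of this particular
        # input, i.e. the tensor shapes below are dynamic.
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                out[mask] = top_prob[mask].unsqueeze(-1) * expert(x[mask])
        return out

# Usage example
tokens = torch.randn(32, 64)
layer = TopOneMoE(d_model=64, d_hidden=256, num_experts=8)
print(layer(tokens).shape)  # torch.Size([32, 64])
```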


