Towards MoE Deployment: Mitigating Inefficiencies in Mixture-of-Expert (MoE) Inference

03/10/2023
by Haiyang Huang, et al.

Mixture-of-Experts (MoE) models have gained popularity for achieving state-of-the-art performance on a wide range of tasks in computer vision and natural language processing. They effectively expand model capacity while incurring only a minimal increase in computation cost during training. However, deploying such models for inference is difficult due to their large size and complex communication pattern. In this work, we characterize two MoE workloads, namely Language Modeling (LM) and Machine Translation (MT), and identify their sources of inefficiency at deployment. We propose three optimization techniques to mitigate these inefficiencies: (1) Dynamic gating, (2) Expert Buffering, and (3) Expert load balancing. We show that dynamic gating improves maximum throughput by 6.21-11.23× for LM, 5.75-10.98× for the MT Encoder, and 2.58-5.71× for the MT Decoder. It also reduces memory usage by up to 1.36× for LM and up to 1.1× for MT. We further propose Expert Buffering, a new caching mechanism that keeps only hot, active experts in GPU memory while buffering the rest in CPU memory. This reduces static memory allocation by up to 1.47×. Finally, we propose a load balancing methodology that provides additional scalability to the workload.
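To illustrate the Expert Buffering idea described above, the sketch below shows one possible way to keep a bounded set of experts resident on the GPU and park the rest in CPU memory. It is a minimal illustration only: the class name ExpertBuffer, the LRU eviction policy, and the PyTorch module-based experts are assumptions for the example, not the paper's actual implementation.

```python
# Minimal sketch of an expert cache: at most `capacity` experts live on the
# GPU; the rest stay in CPU memory and are swapped in on demand.
# Assumptions: PyTorch experts (nn.Module) and a simple LRU eviction policy.
from collections import OrderedDict
import torch.nn as nn


class ExpertBuffer:
    def __init__(self, experts: list[nn.Module], capacity: int, device: str = "cuda"):
        self.experts = experts          # all experts start in CPU memory
        self.capacity = capacity        # max experts resident on the GPU
        self.device = device
        self.resident = OrderedDict()   # expert_id -> GPU-resident expert, LRU order

    def get(self, expert_id: int) -> nn.Module:
        if expert_id in self.resident:
            # Cache hit: refresh this expert's position in the LRU order.
            self.resident.move_to_end(expert_id)
            return self.resident[expert_id]
        if len(self.resident) >= self.capacity:
            # Cache full: evict the least recently used expert back to CPU memory.
            evicted_id, evicted = self.resident.popitem(last=False)
            self.experts[evicted_id] = evicted.to("cpu")
        # Cache miss: copy the requested expert to the GPU and mark it resident.
        gpu_expert = self.experts[expert_id].to(self.device)
        self.resident[expert_id] = gpu_expert
        return gpu_expert
```

In use, the MoE layer would route each token's hidden state through `buffer.get(expert_id)` instead of indexing a fully GPU-resident expert list, so only the hot experts occupy GPU memory at any time.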


