Towards Understanding Mixture of Experts in Deep Learning

08/04/2022
by Zixiang Chen, et al.

The Mixture-of-Experts (MoE) layer, a sparsely-activated model controlled by a router, has achieved great success in deep learning. However, the understanding of such architecture remains elusive. In this paper, we formally study how the MoE layer improves the performance of neural network learning and why the mixture model does not collapse into a single model. Our empirical results suggest that the cluster structure of the underlying problem and the non-linearity of the experts are pivotal to the success of MoE. To further understand this, we consider a challenging classification problem with intrinsic cluster structure, which is hard to learn using a single expert. Yet with the MoE layer, by choosing the experts as two-layer nonlinear convolutional neural networks (CNNs), we show that the problem can be learned successfully. Furthermore, our theory shows that the router can learn the cluster-center features, which helps divide the complex input problem into simpler linear classification sub-problems that individual experts can conquer. To our knowledge, this is the first result towards formally understanding the mechanism of the MoE layer for deep learning.
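
To make the architecture concrete, below is a minimal sketch of an MoE layer with top-1 routing, written in PyTorch: a linear router scores each input, and the selected expert (a two-layer CNN with a nonlinear activation) produces the prediction, scaled by its gate probability so the router is trained end to end. The class names, the cubic activation, and the expert/router sizes are illustrative assumptions and not the paper's exact construction.

# Minimal MoE sketch with top-1 routing (illustrative; not the paper's exact setup).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoLayerCNNExpert(nn.Module):
    """Two-layer nonlinear convolutional expert producing one logit per example."""
    def __init__(self, in_channels: int = 3, hidden: int = 16):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x) ** 3            # nonlinear (cubic) activation, an assumed choice
        h = h.mean(dim=(2, 3))           # global average pooling over spatial dims
        return self.head(h).squeeze(-1)  # shape: (batch,)


class MoELayer(nn.Module):
    """A router plus several experts; each input is dispatched to its top-1 expert."""
    def __init__(self, num_experts: int = 4, in_channels: int = 3):
        super().__init__()
        self.experts = nn.ModuleList(
            [TwoLayerCNNExpert(in_channels) for _ in range(num_experts)]
        )
        # Linear router over flattened inputs; ideally it learns cluster-center directions.
        self.router = nn.LazyLinear(num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate_logits = self.router(x.flatten(1))       # (batch, num_experts)
        gate_probs = F.softmax(gate_logits, dim=-1)
        top1 = gate_probs.argmax(dim=-1)              # chosen expert index per example
        out = torch.zeros(x.size(0), device=x.device)
        for k, expert in enumerate(self.experts):
            mask = top1 == k
            if mask.any():
                # Scale by the gate probability so the router also receives gradients.
                out[mask] = gate_probs[mask, k] * expert(x[mask])
        return out


if __name__ == "__main__":
    moe = MoELayer(num_experts=4)
    x = torch.randn(8, 3, 16, 16)
    print(moe(x).shape)  # torch.Size([8]): one logit per input
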

research
12/16/2013

Learning Factored Representations in a Deep Mixture of Experts

Mixtures of Experts combine the outputs of several "expert" networks, ea...
research
06/07/2023

Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks

In deep learning, mixture-of-experts (MoE) activates one or few experts ...
research
05/25/2021

Mixture of ELM based experts with trainable gating network

Mixture of experts method is a neural network based ensemble learning th...
research
04/22/2022

Balancing Expert Utilization in Mixture-of-Experts Layers Embedded in CNNs

This work addresses the problem of unbalanced expert utilization in spar...
research
02/28/2023

Improving Expert Specialization in Mixture of Experts

Mixture of experts (MoE), introduced over 20 years ago, is the simplest ...
research
06/24/2017

Deep Mixture of Diverse Experts for Large-Scale Visual Recognition

In this paper, a deep mixture of diverse experts algorithm is developed ...
research
02/21/2018

Globally Consistent Algorithms for Mixture of Experts

Mixture-of-Experts (MoE) is a widely popular neural network architecture...
