Improving Expert Specialization in Mixture of Experts

02/28/2023
by Yamuna Krishnamurthy, et al.

Mixture of experts (MoE), introduced over 20 years ago, is the simplest gated modular neural network architecture. There is renewed interest in MoE because its conditional computation allows only parts of the network to be used during each inference, as was recently demonstrated in large-scale natural language processing models. MoE is also of potential interest for continual learning, as experts may be reused for new tasks and new experts introduced. The gate in the MoE architecture learns task decompositions and the individual experts learn simpler functions appropriate to the gate's decomposition. In this paper: (1) we show that the original MoE architecture and its training method do not guarantee intuitive task decompositions and good expert utilization; indeed, they can fail spectacularly even for simple data such as MNIST and FashionMNIST; (2) we introduce a novel gating architecture, similar to attention, that improves performance and results in a lower-entropy task decomposition; and (3) we introduce a novel data-driven regularization that improves expert specialization. We empirically validate our methods on the MNIST, FashionMNIST and CIFAR-100 datasets.
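For orientation, the sketch below shows the classic dense MoE baseline the abstract refers to: a learned softmax gate weights the outputs of all experts, and the entropy of the gate distribution is one way to quantify how decisively inputs are routed to experts. This is a generic PyTorch illustration under assumed names (SimpleMoE, gate_entropy); it is not the paper's attention-like gate or its data-driven regularization.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleMoE(nn.Module):
    """Classic dense mixture of experts: a learned softmax gate mixes the
    outputs of all experts for every input sample."""

    def __init__(self, in_dim, hidden_dim, out_dim, num_experts):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, out_dim),
            )
            for _ in range(num_experts)
        ])
        # The gate sees the same input and produces one weight per expert.
        self.gate = nn.Linear(in_dim, num_experts)

    def forward(self, x):
        gate_probs = F.softmax(self.gate(x), dim=-1)                    # (batch, experts)
        expert_outs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, experts, out_dim)
        # Gate-weighted combination of expert outputs.
        y = torch.einsum("be,beo->bo", gate_probs, expert_outs)
        return y, gate_probs


def gate_entropy(gate_probs, eps=1e-8):
    """Mean per-sample entropy of the gate distribution. Lower entropy means
    each sample is routed more decisively to a single expert, i.e. a sharper
    task decomposition."""
    return -(gate_probs * (gate_probs + eps).log()).sum(dim=-1).mean()

In this generic setup, one simple way to push toward specialization is to add a small multiple of gate_entropy(gate_probs) to the task loss during training; the paper's own gating architecture and regularizer differ and are described in the full text.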


Related research

08/31/2020 · Anomaly Detection by Recombining Gated Unsupervised Experts
Inspired by mixture-of-experts models and the analysis of the hidden act...

12/29/2021 · Dense-to-Sparse Gate for Mixture-of-Experts
Mixture-of-experts (MoE) is becoming popular due to its success in impro...

02/21/2018 · Globally Consistent Algorithms for Mixture of Experts
Mixture-of-Experts (MoE) is a widely popular neural network architecture...

12/11/2014 · Compact Compositional Models
Learning compact and interpretable representations is a very natural tas...

06/05/2023 · COMET: Learning Cardinality Constrained Mixture of Experts with Trees and Local Search
The sparse Mixture-of-Experts (Sparse-MoE) framework efficiently scales ...

11/18/2016 · Expert Gate: Lifelong Learning with a Network of Experts
In this paper we introduce a model of lifelong learning, based on a Netw...

08/04/2022 · Towards Understanding Mixture of Experts in Deep Learning
The Mixture-of-Experts (MoE) layer, a sparsely-activated model controlle...
