Scaling Vision with Sparse Mixture of Experts

06/10/2021
by Carlos Riquelme, et al.

Sparsely-gated Mixture of Experts networks (MoEs) have demonstrated excellent scalability in Natural Language Processing. In Computer Vision, however, almost all performant networks are "dense", that is, every input is processed by every parameter. We present a Vision MoE (V-MoE), a sparse version of the Vision Transformer, that is scalable and competitive with the largest dense networks. When applied to image recognition, V-MoE matches the performance of state-of-the-art networks while requiring as little as half of the compute at inference time. Further, we propose an extension to the routing algorithm that can prioritize subsets of each input across the entire batch, leading to adaptive per-image compute. This allows V-MoE to trade off performance and compute smoothly at test time. Finally, we demonstrate the potential of V-MoE to scale vision models, and train a 15B-parameter model that attains 90.35% on ImageNet.
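To make the sparse-routing idea concrete, below is a minimal sketch of top-k expert routing as used in MoE layers: a router scores each token, and each token is processed only by its top-k experts. The shapes, names, and toy linear "experts" are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of top-k sparse expert routing (toy example, not the V-MoE code).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(tokens, gate_w, expert_ws, k=2):
    """Route each token to its top-k experts and combine their outputs.

    tokens:    (num_tokens, d_model) patch representations
    gate_w:    (d_model, num_experts) router weights
    expert_ws: list of (d_model, d_model) per-expert weights (toy stand-ins for expert MLPs)
    """
    gate_logits = tokens @ gate_w                    # (num_tokens, num_experts)
    gate_probs = softmax(gate_logits)                # routing probabilities
    topk = np.argsort(-gate_probs, axis=-1)[:, :k]   # top-k expert indices per token

    out = np.zeros_like(tokens)
    for e, w in enumerate(expert_ws):
        # Tokens that selected expert e among their top-k.
        mask = (topk == e).any(axis=-1)
        if not mask.any():
            continue
        # Each expert only processes its assigned tokens: compute stays sparse.
        out[mask] += gate_probs[mask, e:e + 1] * (tokens[mask] @ w)
    return out

rng = np.random.default_rng(0)
d, n_experts, n_tokens = 64, 8, 196                  # e.g. 14x14 image patches
x = rng.normal(size=(n_tokens, d))
gate = rng.normal(size=(d, n_experts)) * 0.02
experts = [rng.normal(size=(d, d)) * 0.02 for _ in range(n_experts)]
y = moe_layer(x, gate, experts, k=2)
print(y.shape)                                        # (196, 64)
```

The paper's routing extension additionally ranks tokens across the whole batch so that, under a reduced compute budget, only the highest-priority tokens are sent to experts; the sketch above omits that capacity/priority logic.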

Related research

09/08/2023
Mobile V-MoEs: Scaling Down Vision Transformers via Sparse Mixture-of-Experts
Sparse Mixture-of-Experts models (MoEs) have recently gained popularity ...

03/13/2023
Scaling Vision-Language Models with Sparse Mixture of Experts
The field of natural language processing (NLP) has made significant stri...

10/22/2020
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
While the Transformer architecture has become the de-facto standard for ...

09/04/2022
A Review of Sparse Expert Models in Deep Learning
Sparse expert models are a thirty-year old concept re-emerging as a popu...

08/21/2020
Biased Mixtures Of Experts: Enabling Computer Vision Inference Under Data Transfer Limitations
We propose a novel mixture-of-experts class to optimize computer vision ...

02/24/2017
Changing Model Behavior at Test-Time Using Reinforcement Learning
Machine learning models are often used at test-time subject to constrain...

06/05/2023
COMET: Learning Cardinality Constrained Mixture of Experts with Trees and Local Search
The sparse Mixture-of-Experts (Sparse-MoE) framework efficiently scales ...