A Theoretical View on Sparsely Activated Networks

08/08/2022
by Cenk Baykal et al.

Deep and wide neural networks successfully fit very complex functions today, but dense models are starting to be prohibitively expensive for inference. To mitigate this, one promising direction is networks that activate a sparse subgraph of the network. The subgraph is chosen by a data-dependent routing function, enforcing a fixed mapping of inputs to subnetworks (e.g., the Mixture of Experts (MoE) paradigm in Switch Transformers). However, prior work is largely empirical, and while existing routing functions work well in practice, they do not lead to theoretical guarantees on approximation ability. We aim to provide a theoretical explanation for the power of sparse networks. As our first contribution, we present a formal model of data-dependent sparse networks that captures salient aspects of popular architectures. We then introduce a routing function based on locality sensitive hashing (LSH) that enables us to reason about how well sparse networks approximate target functions. After representing LSH-based sparse networks with our model, we prove that sparse networks can match the approximation power of dense networks on Lipschitz functions. Applying LSH on the input vectors means that the experts interpolate the target function in different subregions of the input space. To support our theory, we define various datasets based on Lipschitz target functions, and we show that sparse networks give a favorable trade-off between the number of active units and approximation quality.
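
To make the LSH-based routing idea concrete, the sketch below uses random-hyperplane (SimHash-style) hashing to assign each input vector to a single expert, so every input activates only one small subnetwork. This is a minimal illustrative sketch, not the paper's implementation: the names (lsh_route, sparse_forward, num_hyperplanes) and the choice of affine toy experts are assumptions made for brevity.

```python
# Minimal sketch of LSH-based routing for a sparsely activated layer.
# Assumed, illustrative names throughout; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

d = 8                  # input dimension
num_hyperplanes = 3    # hash length; 2**3 = 8 buckets, one expert per bucket
num_experts = 2 ** num_hyperplanes

# Random hyperplanes define the hash: nearby inputs tend to share sign bits.
hyperplanes = rng.normal(size=(num_hyperplanes, d))

def lsh_route(x: np.ndarray) -> int:
    """Map an input to an expert index via the sign pattern of its projections."""
    bits = (hyperplanes @ x > 0).astype(int)
    return int(bits @ (2 ** np.arange(num_hyperplanes)))

# Toy experts: each is a single affine map; in an MoE layer each would be an MLP.
experts = [(rng.normal(size=d), rng.normal()) for _ in range(num_experts)]

def sparse_forward(x: np.ndarray) -> float:
    """Evaluate only the one expert selected by the LSH routing function."""
    w, b = experts[lsh_route(x)]
    return float(w @ x + b)

x = rng.normal(size=d)
print(lsh_route(x), sparse_forward(x))
```

Because nearby inputs tend to land in the same hash bucket, each expert only ever sees inputs from one cell of a random partition of the input space and can interpolate the target function locally there, which is the intuition behind the approximation guarantee for Lipschitz targets.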
