Hash Layers For Large Sparse Models

06/08/2021
by Stephen Roller, et al.

We investigate the training of sparse layers that use different parameters for different inputs based on hashing in large Transformer models. Specifically, we modify the feedforward layer to hash to different sets of weights depending on the current token, applied to every token in the sequence. We show that this procedure either outperforms or is competitive with learning-to-route mixture-of-experts methods such as Switch Transformers and BASE Layers, while requiring no routing parameters, no extra terms in the objective function such as a load-balancing loss, and no sophisticated assignment algorithm. We study the performance of different hashing techniques, hash sizes, and input features, and show that balanced, random hashes focused on the most local features work best, compared to either learning clusters or using longer-range context. Our approach works well on large language modeling and dialogue tasks, as well as on downstream fine-tuning tasks.
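To make the routing idea concrete, here is a minimal sketch of a hash-based sparse feedforward layer in PyTorch. It assumes a fixed random (balanced in expectation) mapping from token IDs to experts and standard two-layer feedforward experts; the class and parameter names are illustrative, not the authors' released code.

```python
import torch
import torch.nn as nn

class HashLayer(nn.Module):
    """Sketch of a hash layer: each token is routed to one expert FFN
    by a fixed random hash of its token ID (no learned router)."""

    def __init__(self, d_model, d_ff, num_experts, vocab_size, seed=0):
        super().__init__()
        # One feedforward "expert" per hash bucket.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_ff),
                nn.ReLU(),
                nn.Linear(d_ff, d_model),
            )
            for _ in range(num_experts)
        )
        # Fixed random token-to-expert assignment, chosen once before
        # training and never updated (balanced in expectation).
        g = torch.Generator().manual_seed(seed)
        self.register_buffer(
            "token_to_expert",
            torch.randint(num_experts, (vocab_size,), generator=g),
        )

    def forward(self, x, token_ids):
        # x: (batch, seq, d_model); token_ids: (batch, seq)
        out = torch.zeros_like(x)
        expert_ids = self.token_to_expert[token_ids]
        for e, expert in enumerate(self.experts):
            mask = expert_ids == e  # tokens hashed to expert e
            if mask.any():
                out[mask] = expert(x[mask])
        return out

# Example (hypothetical sizes):
# layer = HashLayer(d_model=512, d_ff=2048, num_experts=16, vocab_size=50000)
```

Because the assignment is fixed, there are no routing parameters to learn and no load-balancing loss to add; each token's forward pass touches only one expert's weights.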


Related Research

03/14/2022
Efficient Language Modeling with Sparse all-MLP
All-MLP architectures have attracted increasing interest as an alternati...

03/30/2021
BASE Layers: Simplifying Training of Large, Sparse Models
We introduce a new balanced assignment of experts (BASE) layer for large...

04/06/2017
Online Hashing
Although hash function learning algorithms have achieved great success i...

02/11/2020
Superbloom: Bloom filter meets Transformer
We extend the idea of word pieces in natural language models to machine ...

10/18/2019
DLB: Deep Learning Based Load Balancing
Load balancing mechanisms have been widely adopted by distributed platfo...

05/04/2023
A Sparse Johnson-Lindenstrauss Transform using Fast Hashing
The Sparse Johnson-Lindenstrauss Transform of Kane and Nelson (SODA 2012...

01/31/2023
The Power of External Memory in Increasing Predictive Model Capacity
One way of introducing sparsity into deep networks is by attaching an ex...
