Which transformer architecture fits my data? A vocabulary bottleneck in self-attention

05/09/2021
by Noam Wies, et al.

After their successful debut in natural language processing, Transformer architectures are now becoming the de facto standard in many domains. An obstacle to their deployment over new modalities is the architectural configuration: the optimal depth-to-width ratio has been shown to vary dramatically across data types (e.g., it is 10x larger over images than over language). We theoretically predict the existence of an embedding rank bottleneck that limits the contribution of self-attention width to Transformer expressivity. We thus directly tie the input vocabulary size and rank to the optimal depth-to-width ratio, since a small vocabulary size or rank dictates an added advantage of depth over width. We empirically demonstrate the existence of this bottleneck and its implications for the depth-to-width interplay of Transformer architectures, linking the architectural variability across domains to the often glossed-over use of different vocabulary sizes or embedding ranks in different domains. As an additional benefit, our rank-bottlenecking framework allows us to identify size redundancies of 25%-50% in leading NLP models such as ALBERT and T5.
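To make the rank argument concrete, the following is a minimal numpy sketch (not code from the paper; all sizes and names are illustrative). It shows that the token representations entering the first self-attention layer are rows of the embedding matrix, so they span a subspace of dimension at most min(vocabulary size, embedding rank): widening the model beyond that rank cannot add new directions at the input.

```python
import numpy as np

# Illustrative sizes (assumptions, not taken from the paper):
vocab_size = 64    # small vocabulary, e.g. a pixel- or character-level alphabet
d_model = 512      # Transformer width (embedding dimension)
seq_len = 1024
batch = 8

rng = np.random.default_rng(0)

# Token-embedding matrix: one d_model-dimensional row per vocabulary symbol.
W_E = rng.standard_normal((vocab_size, d_model))

# A batch of token ids and their embeddings (the input to the first layer).
tokens = rng.integers(0, vocab_size, size=batch * seq_len)
X = W_E[tokens]                        # shape: (batch * seq_len, d_model)

# Every row of X is a row of W_E, so rank(X) <= min(vocab_size, d_model);
# here the vocabulary, not the width, is the binding constraint.
print(np.linalg.matrix_rank(X))        # at most 64, despite d_model = 512
```

In this regime, the abstract's claim is that a small vocabulary size or embedding rank confers an added advantage on depth over width, since the extra width cannot be exploited by the rank-limited input representations.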


Related research

- Limits to Depth Efficiencies of Self-Attention (06/22/2020): Self-attention architectures, which are rapidly pushing the frontier in ...
- Vision Transformer Architecture Search (06/25/2021): Recently, transformers have shown great superiority in solving computer ...
- On the Connection Between MPNN and Graph Transformer (01/27/2023): Graph Transformer (GT) recently has emerged as a new paradigm of graph l...
- Low-Rank Bottleneck in Multi-head Attention Models (02/17/2020): Attention based Transformer architecture has enabled significant advance...
- The Shaped Transformer: Attention Models in the Infinite Depth-and-Width Limit (06/30/2023): In deep learning theory, the covariance matrix of the representations se...
- Wide Attention Is The Way Forward For Transformers (10/02/2022): The Transformer is an extremely powerful and prominent deep learning arc...
