What Makes for Good Tokenizers in Vision Transformer?

12/21/2022
by Shengju Qian, et al.

The transformer architecture, which has recently seen booming adoption in vision tasks, departs from the long-dominant convolutional paradigm. Relying on a tokenization process that splits inputs into multiple tokens, transformers extract pairwise relationships among tokens using self-attention. Yet while the tokenizer is the stem of every transformer, what makes for a good tokenizer has not been well understood in computer vision. In this work, we investigate this uncharted problem from an information trade-off perspective. Beyond unifying and explaining existing structural modifications, our derivation leads to better design strategies for vision tokenizers. The proposed Modulation across Tokens (MoTo) incorporates inter-token modeling capability through normalization, and a regularization objective, TokenProp, is added to the standard training regime. Through extensive experiments on various transformer architectures, we observe both improved performance and intriguing properties from these two plug-and-play designs, at negligible computational overhead. These observations further underline the importance of the commonly overlooked tokenizer designs in vision transformers.
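
The abstract describes two ingredients: a tokenizer that splits an image into patch tokens, and a normalization layer that injects inter-token information. The PyTorch sketch below illustrates this general idea under stated assumptions; it is not the paper's actual MoTo implementation, whose details are not given here, and both class names (PatchTokenizer, CrossTokenModulatedNorm) and the mean-pooled modulation scheme are hypothetical.

# Minimal sketch (PyTorch) of a patch tokenizer plus a hypothetical
# cross-token modulated normalization. Illustrates the general idea in
# the abstract; NOT the paper's actual MoTo layer.
import torch
import torch.nn as nn


class PatchTokenizer(nn.Module):
    """Split an image into non-overlapping patches and embed each as a token."""

    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        # A strided convolution is the standard ViT tokenizer.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                      # x: (B, C, H, W)
        x = self.proj(x)                       # (B, dim, H/ps, W/ps)
        return x.flatten(2).transpose(1, 2)    # (B, N, dim) token sequence


class CrossTokenModulatedNorm(nn.Module):
    """Hypothetical layer: per-token LayerNorm whose affine parameters are
    modulated by a summary of all tokens, injecting inter-token information."""

    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.to_scale = nn.Linear(dim, dim)
        self.to_shift = nn.Linear(dim, dim)

    def forward(self, tokens):                 # tokens: (B, N, dim)
        ctx = tokens.mean(dim=1, keepdim=True) # (B, 1, dim) global token context
        scale = 1 + self.to_scale(ctx)         # context-dependent scale
        shift = self.to_shift(ctx)             # context-dependent shift
        return self.norm(tokens) * scale + shift


if __name__ == "__main__":
    tok = PatchTokenizer()
    mod = CrossTokenModulatedNorm(768)
    imgs = torch.randn(2, 3, 224, 224)
    out = mod(tok(imgs))
    print(out.shape)                           # torch.Size([2, 196, 768])

With a 16x16 patch size on a 224x224 input, the tokenizer yields 196 tokens of dimension 768; the modulated norm then makes each token's normalization depend on a pooled summary of all tokens, which is one plausible reading of "inter-token modeling capability through normalization."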

research · 12/05/2021
Dynamic Token Normalization Improves Vision Transformer
Vision Transformer (ViT) and its variants (e.g., Swin, PVT) have achieve...

research · 04/10/2023
ViT-Calibrator: Decision Stream Calibration for Vision Transformer
A surge of interest has emerged in utilizing Transformers in diverse vis...

research · 04/19/2022
Multimodal Token Fusion for Vision Transformers
Many adaptations of transformers have emerged to address the single-moda...

research · 02/24/2022
Learning to Merge Tokens in Vision Transformers
Transformers are widely applied to solve natural language understanding ...

research · 06/30/2021
Augmented Shortcuts for Vision Transformers
Transformer models have achieved great progress on computer vision tasks...

research · 06/01/2023
White-Box Transformers via Sparse Rate Reduction
In this paper, we contend that the objective of representation learning ...

research · 02/08/2023
Mitigating Bias in Visual Transformers via Targeted Alignment
As transformer architectures become increasingly prevalent in computer v...
