LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference

04/02/2021
by Ben Graham, et al.

We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks and apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers. As a result, we propose LeViT: a hybrid neural network for fast-inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show that they are suitable for most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 3.3 times faster than EfficientNet on CPU.
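To make the attention bias concrete: the idea is to replace positional embeddings with a learned per-head scalar for every relative offset between two grid positions, added directly to the attention logits. Below is a minimal PyTorch sketch, not the authors' released code; the class name AttentionWithBias, the resolution argument, and the overall wiring are illustrative assumptions.

    import torch
    import torch.nn as nn

    class AttentionWithBias(nn.Module):
        """Sketch of attention with a learned per-head bias for each
        relative (|dx|, |dy|) offset, added to the logits in place of
        positional embeddings (illustrative, not the official code)."""

        def __init__(self, dim, num_heads, resolution):
            super().__init__()
            self.num_heads = num_heads
            self.head_dim = dim // num_heads
            self.scale = self.head_dim ** -0.5
            self.qkv = nn.Linear(dim, dim * 3)
            self.proj = nn.Linear(dim, dim)

            # Enumerate grid points and map each unique relative offset
            # to an index into a learned per-head bias table.
            points = [(x, y) for x in range(resolution) for y in range(resolution)]
            offsets, idxs = {}, []
            for p1 in points:
                for p2 in points:
                    offset = (abs(p1[0] - p2[0]), abs(p1[1] - p2[1]))
                    if offset not in offsets:
                        offsets[offset] = len(offsets)
                    idxs.append(offsets[offset])
            self.attention_biases = nn.Parameter(torch.zeros(num_heads, len(offsets)))
            self.register_buffer(
                "bias_idxs",
                torch.LongTensor(idxs).view(len(points), len(points)),
            )

        def forward(self, x):  # x: (batch, N, dim), N = resolution**2
            B, N, C = x.shape
            qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
            q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each (B, heads, N, head_dim)
            attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, heads, N, N)
            attn = attn + self.attention_biases[:, self.bias_idxs]  # add positional bias
            attn = attn.softmax(dim=-1)
            return self.proj((attn @ v).transpose(1, 2).reshape(B, N, C))

For a 14x14 feature map (resolution=14), the bias table holds one scalar per head for each of the 196 distinct offsets, far fewer parameters than the 196x196 logit matrix it modulates, and the positional signal is injected at every attention layer rather than only at the input.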

