CvT: Introducing Convolutions to Vision Transformers

03/29/2021
by Haiping Wu, et al.

We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (shift, scale, and distortion invariance) while maintaining the merits of Transformers (dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (ImageNet-22k) and fine-tuned to downstream tasks. Pre-trained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks. Code will be released at <https://github.com/leoxiaobin/CvT>.
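The hierarchy described above comes from the convolutional token embedding: at each stage, a strided convolution maps the 2D feature grid to a shorter token sequence at lower spatial resolution. As a rough illustration (a plain-Python sketch using the standard convolution output-size formula, with stage settings chosen to resemble CvT's first two stages, not the authors' implementation), the following shows how the token count shrinks stage by stage:

```python
def conv_token_grid(h, w, kernel, stride, pad):
    # Standard convolution output-size formula:
    # out = floor((n + 2*pad - kernel) / stride) + 1
    out_h = (h + 2 * pad - kernel) // stride + 1
    out_w = (w + 2 * pad - kernel) // stride + 1
    return out_h, out_w

# Stage 1: a 7x7 conv with stride 4 on a 224x224 input
h1, w1 = conv_token_grid(224, 224, kernel=7, stride=4, pad=2)
print(h1, w1, h1 * w1)  # token grid and sequence length entering stage 1

# Stage 2: a 3x3 conv with stride 2 halves the grid again
h2, w2 = conv_token_grid(h1, w1, kernel=3, stride=2, pad=1)
print(h2, w2, h2 * w2)
```

Each stage's Transformer blocks then attend over this shorter sequence, which is what keeps FLOPs low while the channel dimension grows.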

Related research

- 04/22/2021: Token Labeling: Training a 85.4% Top-1 Accuracy Vision Transformer with 56M Parameters on ImageNet
  This paper provides a strong baseline for vision transformers on the Ima...

- 10/13/2022: How to Train Vision Transformer on Small-scale Datasets?
  Vision Transformer (ViT), a radically different architecture than convol...

- 05/28/2022: WaveMix-Lite: A Resource-efficient Neural Network for Image Analysis
  Gains in the ability to generalize on image analysis tasks for neural ne...

- 07/07/2022: More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity
  Transformers have quickly shined in the computer vision world since the ...

- 07/25/2022: Self-Distilled Vision Transformer for Domain Generalization
  In the recent past, several domain generalization (DG) methods have been pro...

- 07/27/2022: Convolutional Embedding Makes Hierarchical Vision Transformer Stronger
  Vision Transformers (ViTs) have recently dominated a range of computer v...

- 09/15/2022: Can We Solve 3D Vision Tasks Starting from A 2D Vision Transformer?
  Vision Transformers (ViTs) have proven to be effective in solving 2D im...
