Convolutional Embedding Makes Hierarchical Vision Transformer Stronger

07/27/2022
by Cong Wang, et al.

Vision Transformers (ViTs) have recently dominated a range of computer vision tasks, yet without an appropriate inductive bias they suffer from low training-data efficiency and inferior local semantic representation. Convolutional neural networks (CNNs) inherently capture region-aware semantics, inspiring researchers to reintroduce CNNs into ViT architectures to supply the desirable inductive bias. However, is the locality achieved by the micro-level CNNs embedded in ViTs good enough? In this paper, we investigate the problem by exploring in depth how the macro architecture of hybrid CNN/ViT models enhances the performance of hierarchical ViTs. In particular, we study the role of the token embedding layers, also known as convolutional embedding (CE), and systematically reveal how CE injects the desirable inductive bias into ViTs. In addition, we apply the optimal CE configuration to four recently released state-of-the-art ViTs, effectively boosting their performance. Finally, a family of efficient hybrid CNN/ViT models, dubbed CETNets, is released to serve as generic vision backbones. Specifically, CETNets achieve 84.9% Top-1 accuracy on ImageNet-1K and 48.6% box mAP on COCO, substantially improving the performance of the corresponding state-of-the-art baselines.
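
The abstract does not spell out the CE design, but a convolutional token-embedding layer in a hierarchical ViT is typically a small stack of overlapping strided convolutions that downsamples the image into tokens while baking in locality before any attention layer. Below is a minimal PyTorch sketch of this idea; the ConvEmbed module name, channel widths, and kernel/stride choices are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConvEmbed(nn.Module):
    """Convolutional token embedding (illustrative sketch, not the paper's exact design).

    Replaces the usual non-overlapping patchify projection with a small stack of
    overlapping strided convolutions, injecting locality (a CNN-style inductive
    bias) into the tokens before the transformer blocks see them.
    """

    def __init__(self, in_chans=3, embed_dim=96):
        super().__init__()
        self.proj = nn.Sequential(
            # Two stride-2 convs give an overall 4x downsampling, matching the
            # 4x4 patch resolution of a typical hierarchical ViT's first stage.
            nn.Conv2d(in_chans, embed_dim // 2, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim // 2),
            nn.GELU(),
            nn.Conv2d(embed_dim // 2, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim),
        )

    def forward(self, x):
        x = self.proj(x)                       # (B, embed_dim, H/4, W/4)
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W/16, embed_dim) token sequence
        return tokens, (H, W)

# Usage: embed a batch of images into tokens for the first transformer stage.
x = torch.randn(2, 3, 224, 224)
tokens, (H, W) = ConvEmbed()(x)
print(tokens.shape)  # torch.Size([2, 3136, 96])
```

Because the convolutions overlap, neighboring tokens share receptive fields, which is one way the locality discussed above can enter the model before self-attention is applied.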

Related research

01/26/2022 · Training Vision Transformers with Only 2040 Images
Vision Transformers (ViTs) are emerging as an alternative to convolutiona...

10/28/2022 · Introducing topography in convolutional neural networks
Parts of the brain that carry sensory tasks are organized topographicall...

03/29/2021 · CvT: Introducing Convolutions to Vision Transformers
We present in this paper a new architecture, named Convolutional vision ...

06/01/2022 · A comparative study between vision transformers and CNNs in digital pathology
Recently, vision transformers were shown to be capable of outperforming ...

12/12/2022 · Masked autoencoders are effective solution to transformer data-hungry
Vision Transformers (ViTs) outperform convolutional neural networks (CN...

07/22/2022 · An Impartial Take to the CNN vs Transformer Robustness Contest
Following the surge of popularity of Transformers in Computer Vision, se...

08/18/2022 · The 8-Point Algorithm as an Inductive Bias for Relative Pose Prediction by ViTs
We present a simple baseline for directly estimating the relative pose (...
