Bridging the Gap Between Vision Transformers and Convolutional Neural Networks on Small Datasets

10/12/2022
by   Zhiying Lu, et al.

There remains an extreme performance gap between Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) when they are trained from scratch on small datasets, which is attributed to the lack of inductive bias. In this paper, we further investigate this problem and point out two weaknesses of ViTs in inductive biases: spatial relevance and diverse channel representation. First, on the spatial aspect, objects are locally compact and relevant, so fine-grained features need to be extracted from a token and its neighbors; the lack of data, however, hinders ViTs from attending to this spatial relevance. Second, on the channel aspect, representations exhibit diversity across different channels, but scarce data does not enable ViTs to learn representations strong enough for accurate recognition. To this end, we propose the Dynamic Hybrid Vision Transformer (DHVT) to strengthen these two inductive biases. On the spatial aspect, we adopt a hybrid structure in which convolution is integrated into the patch embedding and the multi-layer perceptron (MLP) module, forcing the model to capture each token's features along with those of its neighbors. On the channel aspect, we introduce a dynamic feature aggregation module in the MLP and a new "head token" design in the multi-head self-attention module, which together re-calibrate the channel representations and let different channel-group representations interact with one another. The fusion of these weak channel representations forms a representation strong enough for classification. With this design, we eliminate the performance gap between CNNs and ViTs, and DHVT achieves a series of state-of-the-art results with a lightweight model: 85.68% top-1 accuracy on ImageNet-1K with 24.0M parameters. Code is available at https://github.com/ArieSeirack/DHVT.
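The abstract names three concrete mechanisms: convolution folded into the patch embedding and the MLP (spatial aspect), and per-head "head tokens" in self-attention (channel aspect). Below is a minimal PyTorch sketch of how such components could be wired together; the module names, kernel sizes, hidden widths, and the pooling-based head-token construction are illustrative assumptions rather than the authors' exact design, which lives in the linked repository.

```python
import torch
import torch.nn as nn


class ConvPatchEmbed(nn.Module):
    """Patch embedding via an overlapping convolution (spatial aspect)."""

    def __init__(self, in_ch=3, dim=192, patch=4):
        super().__init__()
        # stride < kernel size, so each patch token also sees its neighbors
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch + 3,
                              stride=patch, padding=2)

    def forward(self, x):                       # x: (B, C, H, W)
        x = self.proj(x)                        # (B, dim, H/p, W/p)
        return x.flatten(2).transpose(1, 2)     # (B, N, dim)


class ConvMLP(nn.Module):
    """Feed-forward block with a depthwise conv between the two linears."""

    def __init__(self, dim=192, hidden=768):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(dim, hidden), nn.Linear(hidden, dim)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.act = nn.GELU()

    def forward(self, x, hw):                   # x: (B, N, dim)
        b = x.shape[0]
        h, w = hw
        x = self.act(self.fc1(x))
        # fold tokens back into a 2-D grid so the depthwise conv can mix
        # each token with its spatial neighbors
        x = x.transpose(1, 2).reshape(b, -1, h, w)
        x = self.act(self.dw(x)).flatten(2).transpose(1, 2)
        return self.fc2(x)


class HeadTokenAttention(nn.Module):
    """Self-attention with one extra 'head token' per head (channel aspect)."""

    def __init__(self, dim=192, heads=3):
        super().__init__()
        self.heads, self.hd = heads, dim // heads
        # lifts each head's pooled channel group to a full-width token
        self.to_token = nn.Linear(dim // heads, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, N, dim)
        b = x.shape[0]
        # mean-pool each head's channel group over all patch tokens
        groups = x.mean(dim=1).reshape(b, self.heads, self.hd)
        head_tokens = self.to_token(groups)     # (B, heads, dim)
        seq = torch.cat([head_tokens, x], dim=1)
        seq, _ = self.attn(seq, seq, seq)
        # the head tokens have now interacted with every channel group;
        # fusing them yields a single vector usable for classification
        fused = seq[:, :self.heads].mean(dim=1)  # (B, dim)
        return seq[:, self.heads:], fused


# toy forward pass on a CIFAR-sized input
imgs = torch.randn(2, 3, 32, 32)
tokens = ConvPatchEmbed()(imgs)                  # (2, 64, 192)
tokens = tokens + ConvMLP()(tokens, hw=(8, 8))   # residual, as in a ViT block
tokens, cls_feat = HeadTokenAttention()(tokens)
print(tokens.shape, cls_feat.shape)              # (2, 64, 192), (2, 192)
```

In this sketch, the overlapping stride in the patch embedding and the depthwise convolution in the MLP are what tie each token to its spatial neighbors, while the head tokens give the otherwise independent channel groups a path to exchange information before classification.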


Related research

04/07/2022 · DaViT: Dual Attention Vision Transformers
In this work, we introduce Dual Attention Vision Transformers (DaViT), a...

04/13/2023 · Remote Sensing Change Detection With Transformers Trained from Scratch
Current transformer-based change detection (CD) approaches either employ...

07/12/2022 · LightViT: Towards Light-Weight Convolution-Free Vision Transformers
Vision transformers (ViTs) are usually considered to be less light-weigh...

03/14/2022 · EIT: Efficiently Lead Inductive Biases to ViT
Vision Transformer (ViT) depends on properties similar to the inductive ...

05/16/2023 · CB-HVTNet: A channel-boosted hybrid vision transformer network for lymphocyte assessment in histopathological images
Transformers, due to their ability to learn long range dependencies, hav...

11/25/2022 · Adaptive Attention Link-based Regularization for Vision Transformers
Although transformer networks are recently employed in various vision ta...
11/10/2022 · Demystify Transformers & Convolutions in Modern Image Deep Networks
Recent success of vision transformers has inspired a series of vision ba...
