Vision Pair Learning: An Efficient Training Framework for Image Classification

12/02/2021
by   Bei Tong, et al.

The transformer is a potentially powerful architecture for vision tasks. Although equipped with more parameters and an attention mechanism, its performance is not yet as dominant as that of CNNs, which remain computationally cheaper and the leading competitors on various vision tasks. One research direction adopts the successful ideas of CNNs to improve the transformer, but it often relies on elaborate, heuristic network design. Observing that transformers and CNNs are complementary in representation learning and convergence speed, we propose an efficient training framework called Vision Pair Learning (VPL) for the image classification task. VPL builds a network composed of a transformer branch, a CNN branch, and a pair learning module. With a multi-stage training strategy, VPL lets each branch learn from its partner during the appropriate stage of training, so that both achieve better performance at a lower time cost. Without external data, VPL promotes the top-1 accuracy of ViT-Base and ResNet-50 on the ImageNet-1k validation set to 83.47% respectively. Experiments on datasets from various other domains confirm the efficacy of VPL and suggest that the transformer performs better when paired with the differently structured CNN in VPL. We also analyze the importance of the components through an ablation study.
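The abstract does not spell out the pair learning module, but frameworks of this kind typically couple the two branches through a mutual-distillation objective on softened predictions. Below is a minimal sketch of such a symmetric distillation loss, assuming pair learning exchanges soft labels between the transformer and CNN branches; the function names, the temperature value, and the symmetric-KL form are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax(logits, t=1.0):
    # Temperature-scaled softmax over one sample's class logits.
    scaled = [z / t for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def pair_learning_loss(logits_a, logits_b, t=2.0):
    # Hypothetical pair loss: symmetric KL divergence between the two
    # branches' softened class distributions, scaled by t^2 as is
    # conventional in temperature-based distillation.
    p = softmax(logits_a, t)
    q = softmax(logits_b, t)
    kl_pq = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    kl_qp = sum(qi * math.log(qi / pi) for pi, qi in zip(p, q))
    return (kl_pq + kl_qp) * t * t
```

The loss is zero when the branches agree exactly and grows as their predictions diverge, so each branch is pulled toward its partner's output distribution during the stages where pair learning is active.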


research
03/27/2021

CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification

The recently developed vision transformer (ViT) has achieved promising r...
research
07/18/2022

Multi-manifold Attention for Vision Transformers

Vision Transformers are very popular nowadays due to their state-of-the-a...
research
08/24/2022

gSwin: Gated MLP Vision Model with Hierarchical Structure of Shifted Window

Following the success in language domain, the self-attention mechanism (...
research
04/04/2023

Multi-Class Explainable Unlearning for Image Classification via Weight Filtering

Machine Unlearning has recently been emerging as a paradigm for selectiv...
research
07/25/2022

Jigsaw-ViT: Learning Jigsaw Puzzles in Vision Transformer

The success of Vision Transformer (ViT) in various computer vision tasks...
research
09/16/2022

A Mosquito is Worth 16x16 Larvae: Evaluation of Deep Learning Architectures for Mosquito Larvae Classification

Mosquito-borne diseases (MBDs), such as dengue virus, chikungunya virus,...
research
06/12/2021

Dynamic Clone Transformer for Efficient Convolutional Neural Networks

Convolutional networks (ConvNets) have shown impressive capability to so...
