Lightweight Vision Transformer with Bidirectional Interaction

06/01/2023
by Qihang Fan et al.

Recent advancements in vision backbones have significantly improved their performance by simultaneously modeling images' local and global contexts. However, the bidirectional interaction between these two contexts, which is important in the human visual system, has not been well explored and exploited. This paper proposes a Fully Adaptive Self-Attention (FASA) mechanism for vision transformers that models local and global information, as well as the bidirectional interaction between them, in context-aware ways. Specifically, FASA employs self-modulated convolutions to adaptively extract local representations while utilizing self-attention in a down-sampled space to extract global representations. Subsequently, it conducts a bidirectional adaptation process between the local and global representations to model their interaction. In addition, we introduce a fine-grained downsampling strategy to enhance the down-sampled self-attention mechanism for finer-grained global perception. Based on FASA, we develop a family of lightweight vision backbones, the Fully Adaptive Transformer (FAT) family. Extensive experiments on multiple vision tasks demonstrate that FAT achieves impressive performance. Notably, FAT accomplishes a 77.6% top-1 accuracy on ImageNet-1K with only 0.7G FLOPs, surpassing the most advanced ConvNets and Transformers of similar model size and computational cost. Moreover, our model exhibits faster speed on modern GPUs compared to other models. Code will be available at https://github.com/qhfan/FAT.
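
To make the FASA description above concrete, here is a minimal PyTorch sketch of the three steps the abstract names: a self-modulated convolution for local context, self-attention over a down-sampled token grid for global context, and a bidirectional adaptation between the two. All names (FASASketch, local_gate, pool_size) and the choice of sigmoid gating are illustrative assumptions, not the authors' implementation; the official code is at the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FASASketch(nn.Module):
    """Hypothetical sketch of the FASA idea: local and global branches
    plus a bidirectional adaptation between them."""
    def __init__(self, dim, num_heads=4, pool_size=8):
        super().__init__()
        self.pool_size = pool_size  # assumed down-sampled grid for global attention
        # Local branch: depth-wise conv whose output is gated by the input
        # itself (a simple stand-in for "self-modulated convolution").
        self.local_conv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.local_gate = nn.Conv2d(dim, dim, 1)
        # Global branch: multi-head self-attention over pooled tokens.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Bidirectional adaptation: each branch gates the other.
        self.local_to_global = nn.Conv2d(dim, dim, 1)
        self.global_to_local = nn.Conv2d(dim, dim, 1)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):  # x: (B, C, H, W)
        B, C, H, W = x.shape
        # Local representation via a gated (self-modulated) convolution.
        local = self.local_conv(x) * torch.sigmoid(self.local_gate(x))
        # Global representation via self-attention in down-sampled space.
        pooled = F.adaptive_avg_pool2d(x, self.pool_size)   # (B, C, p, p)
        tokens = pooled.flatten(2).transpose(1, 2)          # (B, p*p, C)
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(B, C, self.pool_size, self.pool_size)
        glob = F.interpolate(glob, size=(H, W), mode="bilinear",
                             align_corners=False)
        # Bidirectional adaptation: local modulates global and vice versa.
        glob_adapted = glob * torch.sigmoid(self.local_to_global(local))
        local_adapted = local * torch.sigmoid(self.global_to_local(glob))
        return self.proj(local_adapted + glob_adapted)
```

The block is shape-preserving: FASASketch(dim=64)(torch.randn(2, 64, 56, 56)) returns a tensor of the same (2, 64, 56, 56) shape, so a module of this form can slot into a hierarchical backbone stage.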

Related research

09/19/2022
Axially Expanded Windows for Local-Global Interaction in Vision Transformers
Recently, Transformers have shown promising performance in various visio...

11/27/2022
Semantic-Aware Local-Global Vision Transformer
Vision Transformers have achieved remarkable progress, among which Swi...

07/20/2022
Locality Guidance for Improving Vision Transformers on Tiny Datasets
While the Vision Transformer (VT) architecture is becoming trendy in com...

03/18/2022
Local-Global Context Aware Transformer for Language-Guided Video Segmentation
We explore the task of language-guided video segmentation (LVS). Previou...

06/09/2023
FasterViT: Fast Vision Transformers with Hierarchical Attention
We design a new family of hybrid CNN-ViT neural networks, named FasterVi...

05/02/2023
AxWin Transformer: A Context-Aware Vision Transformer Backbone with Axial Windows
Recently, Transformers have shown good performance in several vision tasks...

03/31/2023
Rethinking Local Perception in Lightweight Vision Transformer
Vision Transformers (ViTs) have been shown to be effective in various vi...
