
Aggregating Global Features into Local Vision Transformer

01/30/2022
by Krushi Patel, et al.

Local Transformer-based classification models have recently achieved promising results with relatively low computational costs. However, the effect of aggregating spatial global information in local Transformer-based architectures remains unclear. This work investigates the outcome of applying a global attention-based module, named multi-resolution overlapped attention (MOA), after each stage of a local window-based Transformer. The proposed MOA employs slightly larger, overlapped patches for the keys, which enables neighborhood pixel information transmission and leads to a significant performance gain. In addition, we thoroughly investigate the effect of the dimensions of essential architecture components through extensive experiments and discover an optimal architecture design. Extensive experimental results on the CIFAR-10, CIFAR-100, and ImageNet-1K datasets demonstrate that the proposed approach outperforms previous vision Transformers with comparatively fewer parameters.
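To make the MOA idea concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes queries are pooled from non-overlapping local windows while keys and values come from slightly larger, overlapped patches (here extracted with nn.Unfold); the class name MOABlock, the window/overlap sizes, and the pooling and fusion choices are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MOABlock(nn.Module):
    """Sketch of multi-resolution overlapped attention applied after a stage."""
    def __init__(self, dim, window=7, overlap=3, heads=4):
        super().__init__()
        self.window = window
        self.to_q = nn.Linear(dim, dim)
        # Keys/values are built from overlapped patches of size (window + overlap),
        # strided by one window, so each key "sees" its neighboring windows too.
        self.unfold = nn.Unfold(kernel_size=window + overlap,
                                stride=window,
                                padding=(overlap + 1) // 2)
        self.to_kv = nn.Linear(dim * (window + overlap) ** 2, 2 * dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, C, H, W), H and W assumed divisible by window
        B, C, H, W = x.shape
        # Queries: one summary token per non-overlapping window.
        q = F.avg_pool2d(x, self.window)                 # (B, C, H/w, W/w)
        q = self.to_q(q.flatten(2).transpose(1, 2))      # (B, N, C)
        # Keys/values: slightly larger, overlapped patches.
        kv = self.unfold(x).transpose(1, 2)              # (B, N, C*(window+overlap)^2)
        k, v = self.to_kv(kv).chunk(2, dim=-1)           # (B, N, C) each
        out, _ = self.attn(q, k, v)                      # global attention across windows
        out = self.proj(out).transpose(1, 2)             # (B, C, N)
        out = out.reshape(B, C, H // self.window, W // self.window)
        # Broadcast each window's globally attended token back onto its window.
        out = F.interpolate(out, scale_factor=float(self.window), mode="nearest")
        return x + out                                   # residual fusion with local features
```

As a quick check under these assumptions, MOABlock(dim=96)(torch.randn(2, 96, 56, 56)) returns a tensor of the same shape, so the module can be dropped in between stages of a window-based Transformer without changing the feature-map resolution.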


Related research

07/12/2021  GiT: Graph Interactive Transformer for Vehicle Re-identification
Transformers are more and more popular in computer vision, which treat a...

06/21/2022  Vicinity Vision Transformer
Vision transformers have shown great success on numerous computer vision...

11/25/2021  Global Interaction Modelling in Vision Transformer via Super Tokens
With the popularity of Transformer architectures in computer vision, the...

07/01/2022  DALG: Deep Attentive Local and Global Modeling for Image Retrieval
Deeply learned representations have achieved superior image retrieval pe...

05/08/2022  Transformer Tracking with Cyclic Shifting Window Attention
Transformer architecture has been showing its great strength in visual o...

05/26/2021  Aggregating Nested Transformers
Although hierarchical structures are popular in recent vision transforme...