Locally Shifted Attention With Early Global Integration

12/09/2021
by Shelly Sheynin, et al.

Recent work has shown the potential of transformers for computer vision applications. An image is first partitioned into patches, which are then used as input tokens for the attention mechanism. Due to the quadratic cost of attention in the number of tokens, either a large patch size is used, resulting in coarse-grained global interactions, or attention is applied only within a local region of the image, at the expense of long-range interactions. In this work, we propose an approach that allows for both coarse global interactions and fine-grained local interactions already at the early layers of a vision transformer. At the core of our method is the application of local and global attention layers. In the local attention layer, we apply attention to each patch and its local shifts, resulting in virtually located patches that are not bound to a single, specific location. These virtually located patches are then used in a global attention layer. The separation of the attention layer into local and global counterparts keeps the computational cost low in the number of patches, while still supporting data-dependent localization already at the first layer, in contrast to the static positioning in other vision transformers. Our method is shown to be superior to both convolutional and transformer-based methods for image classification on CIFAR10, CIFAR100, and ImageNet. Code is available at: https://github.com/shellysheynin/Locally-SAG-Transformer.
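
The two-step mechanism the abstract describes lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration, not the authors' implementation (see the linked repository for that): each patch attends over its shifted copies to form a "virtually located" token, and the resulting tokens then pass through ordinary global self-attention. The 3x3 shift neighborhood, module names, and tensor layout are assumptions made for illustration.

```python
# Minimal sketch of local-shift + global attention, assuming patch
# embeddings on a (B, H, W, D) grid and a 3x3 shift neighborhood.
# Illustrative only; module names and layout are hypothetical.
import torch
import torch.nn as nn


class LocalShiftAttention(nn.Module):
    """Each patch attends over itself and its spatially shifted copies,
    producing a 'virtually located' token: a data-dependent blend of
    nearby positions rather than a token bound to one fixed location."""

    def __init__(self, dim, shifts=(-1, 0, 1)):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.shifts = [(dy, dx) for dy in shifts for dx in shifts]
        self.scale = dim ** -0.5

    def forward(self, x):  # x: (B, H, W, D)
        # Stack the shifted grids: (B, H, W, S, D) with S = 9 shifts.
        shifted = torch.stack(
            [torch.roll(x, shifts=(dy, dx), dims=(1, 2))
             for dy, dx in self.shifts], dim=3)
        q = self.q(x).unsqueeze(3)                  # (B, H, W, 1, D)
        k, v = self.kv(shifted).chunk(2, dim=-1)    # each (B, H, W, S, D)
        attn = (q * k).sum(-1, keepdim=True) * self.scale
        attn = attn.softmax(dim=3)                  # weights over the S shifts
        return (attn * v).sum(dim=3)                # (B, H, W, D)


class LocalGlobalBlock(nn.Module):
    """Local shift attention followed by ordinary global self-attention,
    so fine-grained localization and coarse global mixing can both
    happen in the same (even the first) layer."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.local = LocalShiftAttention(dim)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):  # x: (B, H, W, D)
        B, H, W, D = x.shape
        x = x + self.local(self.norm1(x))   # fine-grained, data-dependent step
        t = x.reshape(B, H * W, D)          # flatten the grid to tokens
        n = self.norm2(t)
        t = t + self.global_attn(n, n, n, need_weights=False)[0]
        return t.reshape(B, H, W, D)


x = torch.randn(2, 14, 14, 64)              # 2 images, 14x14 patch grid
print(LocalGlobalBlock(64)(x).shape)        # torch.Size([2, 14, 14, 64])
```

In this sketch the local step scores only S = 9 keys per patch, so it scales linearly in the number of patches, consistent with the abstract's point that splitting attention into local and global counterparts keeps the cost low while still allowing data-dependent localization in the first layer.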
