CAT: Cross Attention in Vision Transformer

06/10/2021
by Hezheng Lin, et al.

Since the Transformer found widespread use in NLP, its potential in CV has been recognized, inspiring many new approaches. However, after an image is tokenized, replacing word tokens with image patches requires a vast amount of computation (e.g., in ViT), which bottlenecks model training and inference. In this paper, we propose a new attention mechanism for the Transformer, termed Cross Attention, which alternates attention within each image patch, instead of over the whole image, to capture local information, with attention between image patches divided from single-channel feature maps to capture global information. Both operations require less computation than standard self-attention in the Transformer. By alternately applying attention within patches and between patches, we implement Cross Attention to maintain performance at a lower computational cost and build a hierarchical network, called Cross Attention Transformer (CAT), for other vision tasks. Our base model achieves state-of-the-art results on ImageNet-1K and improves the performance of other methods on COCO and ADE20K, illustrating that our network has the potential to serve as a general backbone. The code and models are available at <https://github.com/linhezheng19/CAT>.
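As a rough illustration of the two attention patterns described in the abstract, here is a minimal sketch, not the authors' implementation: it assumes a PyTorch backend, omits the learned query/key/value projections, multi-head splitting, and positional terms for brevity, and the function names `inner_patch_attention` and `cross_patch_attention` are hypothetical.

```python
# Minimal sketch (not the CAT repo code) of attention within patches
# (local) and attention between patches of single-channel maps (global).
import torch


def inner_patch_attention(x: torch.Tensor, patch: int) -> torch.Tensor:
    """Self-attention restricted to each (patch x patch) window: local info.

    x: (B, H, W, C) feature map; H and W must be divisible by `patch`.
    """
    B, H, W, C = x.shape
    # Partition into non-overlapping patches -> (B * num_patches, patch*patch, C).
    x = x.view(B, H // patch, patch, W // patch, patch, C)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, patch * patch, C)
    attn = torch.softmax(x @ x.transpose(-2, -1) / C ** 0.5, dim=-1)
    out = attn @ x  # attention is computed only inside each patch
    # Reverse the partition back to (B, H, W, C).
    out = out.view(B, H // patch, W // patch, patch, patch, C)
    return out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)


def cross_patch_attention(x: torch.Tensor, patch: int) -> torch.Tensor:
    """Self-attention between patches of each single-channel map: global info.

    Each channel is split into (H/patch * W/patch) patches of patch*patch
    pixels, and attention runs over that patch sequence, per channel.
    """
    B, H, W, C = x.shape
    n = (H // patch) * (W // patch)
    # One sequence per single-channel map -> (B*C, num_patches, patch*patch).
    x = x.permute(0, 3, 1, 2).reshape(B * C, H // patch, patch, W // patch, patch)
    x = x.permute(0, 1, 3, 2, 4).reshape(B * C, n, patch * patch)
    d = patch * patch
    attn = torch.softmax(x @ x.transpose(-2, -1) / d ** 0.5, dim=-1)
    out = attn @ x  # attention is computed across all patches of one channel
    out = out.view(B, C, H // patch, W // patch, patch, patch)
    return out.permute(0, 2, 4, 3, 5, 1).reshape(B, H, W, C)


if __name__ == "__main__":
    x = torch.randn(2, 56, 56, 96)  # (B, H, W, C) feature map
    # Alternating the two operations is the "cross attention" idea.
    y = cross_patch_attention(inner_patch_attention(x, patch=7), patch=7)
    print(y.shape)  # torch.Size([2, 56, 56, 96])
```

The cost intuition: inner-patch attention is quadratic only in the patch size, and cross-patch attention is quadratic only in the number of patches, so alternating them covers local and global interactions without full quadratic attention over all pixels.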


Related research

02/27/2021 · Transformer in Transformer
Transformer is a type of self-attention-based neural networks originally...

12/09/2021 · Locally Shifted Attention With Early Global Integration
Recent work has shown the potential of transformers for computer vision ...

03/11/2022 · Visualizing and Understanding Patch Interactions in Vision Transformer
Vision Transformer (ViT) has become a leading tool in various computer v...

05/06/2023 · DBAT: Dynamic Backward Attention Transformer for Material Segmentation with Cross-Resolution Patches
The objective of dense material segmentation is to identify the material...

04/09/2022 · TransGeo: Transformer Is All You Need for Cross-view Image Geo-localization
The dominant CNN-based methods for cross-view image geo-localization rel...

03/24/2022 · Transformer Compressed Sensing via Global Image Tokens
Convolutional neural networks (CNN) have demonstrated outstanding Compre...

01/24/2022 · Patches Are All You Need?
Although convolutional networks have been the dominant architecture for ...
