Rethinking Query-Key Pairwise Interactions in Vision Transformers

07/01/2022
by Cheng Li, et al.

Vision Transformers have achieved state-of-the-art performance in many visual tasks. Because self-attention has quadratic computational and memory complexity, recent works either apply attention only to low-resolution inputs or restrict the receptive field to a small local region. To overcome these limitations, we propose key-only attention, which excludes query-key pairwise interactions and instead uses a compute-efficient saliency gate to obtain attention weights, modeling local-global interactions in all stages. Key-only attention has linear computational and memory complexity with respect to input size. We hybridize convolution and attention layers in an alternating layout rather than the grafting suggested by previous works, so that all stages benefit from both spatial attention and convolutions. We leverage these improvements to develop a new self-attention model family, LinGlos, which reaches state-of-the-art accuracy in the parameter-limited setting of the ImageNet classification benchmark and significantly outperforms baselines on downstream tasks, e.g., COCO object detection and ADE20K semantic segmentation.
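
To make the mechanism described above concrete, here is a minimal PyTorch sketch of what a key-only attention block and an alternating convolution/attention stage might look like. The saliency gate is assumed to be a learned linear scoring of the keys, and the block composition in `AlternatingStage` is illustrative; neither is taken from the paper itself, which should be consulted for the exact LinGlo design.

```python
# Hedged sketch: attention weights come from a saliency gate over the keys alone
# (no query-key dot products), so the cost is linear in the number of tokens.
import torch
import torch.nn as nn


class KeyOnlyAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.kv = nn.Linear(dim, dim * 2)        # keys and values only; no query projection
        self.gate = nn.Linear(self.head_dim, 1)  # assumed saliency gate: one score per token, per head
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        b, n, d = x.shape
        k, v = self.kv(x).chunk(2, dim=-1)
        k = k.view(b, n, self.num_heads, self.head_dim)
        v = v.view(b, n, self.num_heads, self.head_dim)
        # One saliency weight per token, normalized over the token axis: O(n) work,
        # versus O(n^2) for query-key pairwise attention.
        w = torch.softmax(self.gate(k), dim=1)    # (b, n, heads, 1)
        # Global context: saliency-weighted sum of values, broadcast to every position.
        ctx = (w * v).sum(dim=1, keepdim=True)    # (b, 1, heads, head_dim)
        out = ctx.expand(b, n, -1, -1).reshape(b, n, d)
        return self.proj(out)


class AlternatingStage(nn.Module):
    """Alternating layout: depthwise-convolution blocks interleaved with key-only
    attention blocks inside one stage (instead of grafting conv-only early stages
    onto attention-only late stages). The block composition here is illustrative."""

    def __init__(self, dim: int, depth: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim),
                nn.Conv2d(dim, dim, kernel_size=1),
            )
            if i % 2 == 0
            else KeyOnlyAttention(dim)
            for i in range(depth)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim, height, width)
        for blk in self.blocks:
            if isinstance(blk, KeyOnlyAttention):
                b, d, h, w = x.shape
                tokens = x.flatten(2).transpose(1, 2)                    # (b, h*w, d)
                x = x + blk(tokens).transpose(1, 2).reshape(b, d, h, w)  # residual
            else:
                x = x + blk(x)                                           # residual
        return x
```

Because each token contributes a single scalar saliency weight rather than a full row of pairwise scores, the weighted sum over values scales linearly with the number of tokens, which is the complexity property the abstract claims.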
