Simple parameter-free self-attention approximation

07/22/2023
by Yuwen Zhai, et al.

The quadratic computational complexity of self-attention with respect to token length limits the efficiency of Vision Transformers (ViTs) on edge devices, and hybrid models that combine self-attention with convolution are one way to lighten ViT. We propose SPSA, a self-attention approximation with no training parameters that captures global spatial features with linear complexity. To verify the effectiveness of combining SPSA with convolution, we conduct extensive experiments on image classification and object detection tasks.
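As a rough illustration of how a parameter-free, linear-complexity attention approximation can be structured, the sketch below uses the input tokens themselves as queries, keys, and values (no learned projections) and replaces the softmax with a positive feature map so the key-value aggregation can be computed before the query product, making the cost linear in the token count. The function name spsa_approx and the elu(x)+1 kernel are assumptions for illustration borrowed from the linear-attention literature; the paper's exact SPSA formulation may differ.

import torch
import torch.nn.functional as F


def spsa_approx(x: torch.Tensor) -> torch.Tensor:
    """Approximate global self-attention over tokens x of shape (B, N, C).

    No learned projections: the input itself serves as queries, keys,
    and values. The softmax is replaced by a positive feature map
    (elu(x) + 1), which lets us aggregate keys and values first,
    giving O(N * C^2) cost -- linear in the token count N instead of
    the O(N^2 * C) of exact self-attention.
    """
    q = F.elu(x) + 1.0  # (B, N, C), positive feature map of queries
    k = F.elu(x) + 1.0  # (B, N, C), positive feature map of keys
    v = x               # (B, N, C), values

    kv = torch.einsum("bnc,bnd->bcd", k, v)      # (B, C, C): key-value aggregate
    z = torch.einsum("bnc,bc->bn", q, k.sum(1))  # (B, N): per-token normalizer
    out = torch.einsum("bnc,bcd->bnd", q, kv)    # (B, N, C): attended features
    return out / z.unsqueeze(-1).clamp_min(1e-6)


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 64)  # e.g., 14x14 patches with 64 channels
    print(spsa_approx(tokens).shape)  # torch.Size([2, 196, 64])

Because nothing here is learned, such a module adds no parameters when dropped into a convolutional backbone; only the feature-map choice and normalization determine its behavior.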

Related research

11/28/2022 - FsaNet: Frequency Self-attention for Semantic Segmentation
Considering the spectral properties of images, we propose a new self-att...

04/28/2020 - Exploring Self-attention for Image Recognition
Recent work has shown that self-attention can serve as a basic building ...

04/10/2022 - Linear Complexity Randomized Self-attention Mechanism
Recently, random feature attentions (RFAs) are proposed to approximate t...

11/18/2019 - Affine Self Convolution
Attention mechanisms, and most prominently self-attention, are a powerfu...

03/22/2020 - SAC: Accelerating and Structuring Self-Attention via Sparse Adaptive Connection
While the self-attention mechanism has been widely used in a wide variet...

06/04/2021 - X-volution: On the unification of convolution and self-attention
Convolution and self-attention are acting as two fundamental building bl...

05/28/2021 - An Attention Free Transformer
We introduce Attention Free Transformer (AFT), an efficient variant of T...
