Spatial Bias for Attention-free Non-local Neural Networks

02/24/2023
by Junhyung Go et al.

In this paper, we introduce a spatial bias that lets convolutional neural networks learn global knowledge without self-attention. Owing to their limited receptive field, conventional convolutional neural networks struggle to learn long-range dependencies. Non-local neural networks capture such global knowledge, but the self-attention operation makes their design unavoidably heavy. We therefore propose a fast and lightweight spatial bias that efficiently encodes global knowledge into convolutional neural networks without self-attention. The spatial bias is stacked onto the feature map and convolved together with it, adjusting the spatial structure of the convolutional features; the convolution layer thus learns global knowledge directly, with very few additional resources. Because it is attention-free, our method is very fast and lightweight while still improving network performance considerably. Compared to non-local neural networks, the spatial bias uses about 10 times fewer parameters and achieves comparable performance with 1.6 to 3.3 times higher throughput at very little cost. Furthermore, the spatial bias can be combined with conventional non-local neural networks to further improve the performance of the backbone model. We show that the spatial bias achieves competitive performance, improving classification accuracy by +0.79% and +1.5% on the ImageNet-1K and CIFAR-100 datasets, respectively. Additionally, we validate our method on the MS-COCO and ADE20K datasets for the downstream tasks of object detection and semantic segmentation.
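To make the mechanism concrete, here is a minimal PyTorch sketch of the idea the abstract describes: a cheap, attention-free branch summarizes the feature map into a few "spatial bias" channels, which are stacked (concatenated) onto the convolutional features so that the next convolution sees both local features and a global summary. The module name `SpatialBiasBlock`, the number of bias channels, the pooled grid size, and the 1D mixing convolution are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialBiasBlock(nn.Module):
    """Sketch of an attention-free spatial bias (hypothetical configuration)."""

    def __init__(self, in_channels, out_channels, bias_channels=2, pooled=8):
        super().__init__()
        # Global-context branch: reduce channels, pool to a small grid,
        # mix the pooled positions, then upsample back to input resolution.
        self.reduce = nn.Conv2d(in_channels, bias_channels, kernel_size=1)
        self.pooled = pooled
        self.mix = nn.Conv1d(bias_channels, bias_channels, kernel_size=3, padding=1)
        # The main convolution consumes the features plus the stacked bias maps,
        # so global knowledge is learned by the convolution layer itself.
        self.conv = nn.Conv2d(in_channels + bias_channels, out_channels,
                              kernel_size=3, padding=1)

    def forward(self, x):
        n, _, h, w = x.shape
        # Compress to a few channels on a coarse grid (global summary).
        s = F.adaptive_avg_pool2d(self.reduce(x), self.pooled)  # N x Cb x p x p
        # Flatten the grid and mix positions with a 1D convolution, giving
        # each location a view of distant locations at very low cost.
        s = self.mix(s.flatten(2))                              # N x Cb x p*p
        s = s.view(n, -1, self.pooled, self.pooled)
        # Upsample the bias to feature resolution and stack it channel-wise.
        bias = F.interpolate(s, size=(h, w), mode='bilinear',
                             align_corners=False)
        return self.conv(torch.cat([x, bias], dim=1))

x = torch.randn(2, 64, 32, 32)
block = SpatialBiasBlock(64, 64)
print(block(x).shape)  # torch.Size([2, 64, 32, 32])
```

Because the global branch works on a heavily downsampled grid with only a couple of channels, its parameter and compute overhead is negligible next to a self-attention block, which is consistent with the throughput and parameter savings claimed above.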

