Less Memory, Faster Speed: Refining Self-Attention Module for Image Reconstruction

05/20/2019
by Zheng Wang, et al.

Self-attention (SA) mechanisms can effectively capture global dependencies in deep neural networks and have been applied successfully to natural language processing and image processing. However, SA modules for image reconstruction have high time and space complexity, which restricts their application to higher-resolution images. In this paper, we refine the SA module of self-attention generative adversarial networks (SAGAN) by adapting a non-local operation, revising the connectivity among the units in the SA module, and re-implementing its computational pattern, so that its time and space complexity is reduced from O(n^2) to O(n) while it remains equivalent to the original SA module. Further, we explore the principles behind the module and discover that our module is a special kind of channel attention mechanism. Experimental results on two benchmark image-reconstruction datasets verify that, under the same computational environment, the two models achieve comparable reconstruction quality, but the proposed one runs faster and occupies less memory.
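The abstract's O(n^2)-to-O(n) reduction via a re-implemented computational pattern can be illustrated by the standard associativity trick for non-local/attention operations: instead of materializing the n×n affinity map (Q Kᵀ) V, compute Q (Kᵀ V), whose intermediate is only d×d. This is a minimal sketch of that reordering for an unnormalized attention map; the paper's exact refinement of the SAGAN module (including how its normalization is handled) may differ.

```python
import numpy as np

def quadratic_attention(Q, K, V):
    """Standard pattern: build the n x n affinity map first -> O(n^2) time/memory."""
    A = Q @ K.T            # (n, n) affinity map
    return A @ V           # (n, d) output

def linear_attention(Q, K, V):
    """Reordered by associativity: (Q K^T) V == Q (K^T V).
    The intermediate K^T V is (d, d), so cost grows linearly in n."""
    context = K.T @ V      # (d, d) intermediate, independent of n
    return Q @ context     # (n, d) output

rng = np.random.default_rng(0)
n, d = 1024, 32            # n pixels, d channels (illustrative sizes)
Q, K, V = rng.normal(size=(3, n, d))

# The two computational patterns give identical results.
print(np.allclose(quadratic_attention(Q, K, V), linear_attention(Q, K, V)))
```

Since Kᵀ V aggregates over positions into a d×d channel-to-channel map that is then applied to every query, the reordered form acts on channels rather than positions, which matches the abstract's observation that the refined module is a kind of channel attention.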

