Interlaced Sparse Self-Attention for Semantic Segmentation

07/29/2019
by Lang Huang, et al.

In this paper, we present interlaced sparse self-attention, an approach that improves the efficiency of the self-attention mechanism for semantic segmentation. The main idea is to factorize the dense affinity matrix as the product of two sparse affinity matrices, estimated by two successive attention modules. The first attention module estimates the affinities within subsets of positions separated by long spatial interval distances, and the second estimates the affinities within subsets of positions separated by short spatial interval distances. The two modules are interlaced so that each position can receive information from all other positions. Compared with the original self-attention module, our approach substantially decreases the computational and memory complexity, especially when processing high-resolution feature maps. We empirically verify the effectiveness of our approach on six challenging semantic segmentation benchmarks.
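The factorization above can be implemented as two attention passes over permuted views of the feature map. Below is a minimal PyTorch sketch under stated assumptions, not the authors' exact implementation: the SelfAttention block is plain scaled dot-product attention, the partition sizes ph and pw (and the divisibility of the feature-map height and width by them) are illustrative choices, and the residual connection and output projection of the full method are omitted.

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    # Plain scaled dot-product self-attention over all positions of an
    # (n, c, h, w) feature map; used as the building block for both
    # sparse attention passes.
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (n, h*w, c//2)
        k = self.key(x).flatten(2)                    # (n, c//2, h*w)
        v = self.value(x).flatten(2).transpose(1, 2)  # (n, h*w, c)
        affinity = torch.softmax(q @ k / (c // 2) ** 0.5, dim=-1)
        return (affinity @ v).transpose(1, 2).reshape(n, c, h, w)


class InterlacedSparseSelfAttention(nn.Module):
    # Two successive sparse attention passes: a long-range pass over groups
    # of positions sampled with a large stride, then a short-range pass over
    # local ph x pw blocks, so every position can reach every other one.
    def __init__(self, channels, ph=8, pw=8):
        super().__init__()
        self.ph, self.pw = ph, pw  # partition sizes (illustrative choice)
        self.long_range = SelfAttention(channels)
        self.short_range = SelfAttention(channels)

    def forward(self, x):
        n, c, h, w = x.shape
        ph, pw = self.ph, self.pw
        qh, qw = h // ph, w // pw  # assumes h % ph == 0 and w % pw == 0

        # Long-range pass: each group holds qh*qw positions spaced ph
        # (resp. pw) pixels apart, i.e. long spatial interval distances.
        x = x.reshape(n, c, qh, ph, qw, pw)
        x = x.permute(0, 3, 5, 1, 2, 4).reshape(n * ph * pw, c, qh, qw)
        x = self.long_range(x)

        # Short-range pass: each group holds a contiguous ph x pw block,
        # i.e. short spatial interval distances.
        x = x.reshape(n, ph, pw, c, qh, qw)
        x = x.permute(0, 4, 5, 3, 1, 2).reshape(n * qh * qw, c, ph, pw)
        x = self.short_range(x)

        # Undo the permutations to restore the (n, c, h, w) layout.
        x = x.reshape(n, qh, qw, c, ph, pw)
        return x.permute(0, 3, 1, 4, 2, 5).reshape(n, c, h, w)


# Usage: drop-in replacement for a dense self-attention head.
isa = InterlacedSparseSelfAttention(channels=512)
feats = torch.randn(2, 512, 64, 64)
out = isa(feats)  # (2, 512, 64, 64)
```

With this grouping, the two affinity matrices cost roughly O((HW)^2 / (PhPw)) and O((HW)^2 / (QhQw)) rather than the O((HW)^2) of dense self-attention, which is what makes the approach tractable on high-resolution feature maps.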


Related research

01/09/2020
HMANet: Hybrid Multiple Attention Network for Semantic Segmentation in Aerial Images
Semantic segmentation in very high resolution (VHR) aerial images is one...

07/31/2019
Expectation-Maximization Attention Networks for Semantic Segmentation
Self-attention mechanism has been widely used for various tasks. It is d...

03/26/2020
Fastidious Attention Network for Navel Orange Segmentation
Deep learning achieves excellent performance in many domains, so we not ...

09/13/2020
Efficient Folded Attention for 3D Medical Image Reconstruction and Segmentation
Recently, 3D medical image reconstruction (MIR) and segmentation (MIS) b...

01/19/2021
CAA: Channelized Axial Attention for Semantic Segmentation
Self-attention and channel attention, modelling the semantic interdepend...

09/09/2021
Is Attention Better Than Matrix Decomposition?
As an essential ingredient of modern deep learning, attention mechanism,...

07/11/2023
ResMatch: Residual Attention Learning for Local Feature Matching
Attention-based graph neural networks have made great progress in featur...
