Coordinate Attention for Efficient Mobile Network Design

03/04/2021
by Qibin Hou, et al.

Recent studies on mobile network design have demonstrated the remarkable effectiveness of channel attention (e.g., Squeeze-and-Excitation attention) for lifting model performance, but they generally neglect positional information, which is important for generating spatially selective attention maps. In this paper, we propose a novel attention mechanism for mobile networks that embeds positional information into channel attention, which we call "coordinate attention". Unlike channel attention, which transforms a feature tensor into a single feature vector via 2D global pooling, coordinate attention factorizes channel attention into two 1D feature encoding processes that aggregate features along the two spatial directions, respectively. In this way, long-range dependencies can be captured along one spatial direction while precise positional information is preserved along the other. The resulting feature maps are then encoded separately into a pair of direction-aware and position-sensitive attention maps that can be complementarily applied to the input feature map to augment the representations of the objects of interest. Our coordinate attention is simple and can be flexibly plugged into classic mobile networks, such as MobileNetV2, MobileNeXt, and EfficientNet, with nearly no computational overhead. Extensive experiments demonstrate that coordinate attention is not only beneficial to ImageNet classification but, more interestingly, performs even better on downstream tasks such as object detection and semantic segmentation. Code is available at https://github.com/Andrew-Qibin/CoordAttention.
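To make the mechanism concrete, below is a minimal PyTorch sketch of the factorization described in the abstract: pool along each spatial axis separately, encode the two directional descriptors through a shared bottleneck, then split them into two direction-aware attention maps that are multiplied back onto the input. The class name CoordAtt, the reduction ratio of 32, and the use of ReLU as the nonlinearity are illustrative assumptions, not the authors' exact design; the official implementation is in the repository linked above.

```python
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    """Sketch of coordinate attention: two 1D poolings replace 2D global pooling."""

    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)            # assumed bottleneck width
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over W -> (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over H -> (N, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)               # nonlinearity choice is an assumption
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.size()
        # 1D feature aggregation along each spatial direction
        x_h = self.pool_h(x)                           # (N, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)       # (N, C, W, 1)
        # joint bottleneck encoding of the concatenated directional descriptors
        y = torch.cat([x_h, x_w], dim=2)               # (N, C, H+W, 1)
        y = self.act(self.bn1(self.conv1(y)))          # (N, mid, H+W, 1)
        # split back into the two directions and build the attention maps
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)                  # (N, mid, 1, W)
        a_h = torch.sigmoid(self.conv_h(y_h))          # (N, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w))          # (N, C, 1, W)
        # apply both maps multiplicatively; broadcasting covers W and H
        return x * a_h * a_w
```

A quick shape check on a hypothetical 64-channel feature map:

```python
x = torch.randn(2, 64, 56, 56)
print(CoordAtt(64)(x).shape)  # torch.Size([2, 64, 56, 56])
```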

Related research

07/29/2020 · Linear Attention Mechanism: An Efficient Attention for Semantic Segmentation
09/13/2020 · Attention Cube Network for Image Restoration
06/23/2021 · Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition
04/08/2023 · MC-MLP: Multiple Coordinate Frames in all-MLP Architecture for Vision
12/13/2022 · CAT: Learning to Collaborate Channel and Spatial Attention from Multi-Information Fusion
04/22/2019 · Stochastic Region Pooling: Make Attention More Expressive
10/06/2020 · Rotate to Attend: Convolutional Triplet Attention Module
