Linear Context Transform Block

09/06/2019
by Ruan Dongsheng, et al.

The Squeeze-and-Excitation (SE) block presents a channel attention mechanism for modeling global context by explicitly capturing dependencies between channels. However, the SE block remains poorly understood. In this work, we first revisit the SE block and present a detailed empirical study of the relationship between global context and attention distribution, based on which we propose a simple yet effective module. We call this module the Linear Context Transform (LCT) block: it implicitly captures dependencies between channels and linearly transforms the global context of each channel. The LCT block is extremely lightweight, with negligible parameters and computation. Extensive experiments show that the LCT block outperforms the SE block in image classification on ImageNet and in object detection/segmentation on COCO across many models. Moreover, we demonstrate that the LCT block yields consistent performance gains for existing state-of-the-art detection architectures; for example, it brings gains of 1.5∼1.7 independently of detector strength on the COCO benchmark. We hope our work will provide new insight into channel attention.
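The sketch below is a minimal PyTorch illustration of the idea described in the abstract: pool a per-channel global context, normalize it within channel groups (capturing channel dependencies only implicitly), apply a per-channel linear transform, and gate the input with a sigmoid. The module name, group count, initialization, and other details are assumptions for illustration, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class LCTBlock(nn.Module):
    """Sketch of a Linear Context Transform block (assumed structure)."""

    def __init__(self, channels: int, groups: int = 16, eps: float = 1e-5):
        super().__init__()
        assert channels % groups == 0, "channels must be divisible by groups"
        self.groups = groups
        self.eps = eps
        # Per-channel linear transform (scale and bias) of the normalized context.
        self.w = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.b = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        # Global context: spatial average per channel.
        z = x.mean(dim=(2, 3), keepdim=True)          # (N, C, 1, 1)
        # Normalize the context within channel groups instead of modeling
        # explicit pairwise channel interactions.
        zg = z.view(n, self.groups, c // self.groups)
        mean = zg.mean(dim=2, keepdim=True)
        var = zg.var(dim=2, unbiased=False, keepdim=True)
        zg = (zg - mean) / torch.sqrt(var + self.eps)
        z = zg.view(n, c, 1, 1)
        # Per-channel linear transform followed by sigmoid gating of the input.
        gate = torch.sigmoid(self.w * z + self.b)
        return x * gate


if __name__ == "__main__":
    # Usage: drop the block after a convolutional stage.
    x = torch.randn(2, 64, 32, 32)
    block = LCTBlock(channels=64, groups=16)
    print(block(x).shape)  # torch.Size([2, 64, 32, 32])
```

Because the block only adds two parameters per channel and a normalization over the pooled context, its parameter and compute overhead is negligible compared with an SE block's two fully connected layers.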


