NAM: Normalization-based Attention Module

11/24/2021
by Yichao Liu, et al.

Recognizing less salient features is key to model compression. However, this has not been investigated in the prevailing attention mechanisms. In this work, we propose a novel normalization-based attention module (NAM), which suppresses less salient weights. It applies a weight sparsity penalty to the attention modules, thus making them more computationally efficient while retaining similar performance. A comparison with three other attention mechanisms on both ResNet and MobileNet indicates that our method results in higher accuracy. Code for this paper can be publicly accessed at https://github.com/Christian-lyc/NAM.
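The sketch below illustrates one way the normalization-based channel reweighting described in the abstract can be realized, assuming a PyTorch setting: the batch-normalization scale factors (gamma) act as a measure of channel salience, and channels with small gammas are suppressed. The class name, shapes, and test code are illustrative and are not taken from the paper or its repository; see https://github.com/Christian-lyc/NAM for the official implementation.

```python
# Minimal sketch of normalization-based channel attention (illustrative, not the authors' code).
import torch
import torch.nn as nn

class ChannelAttentionSketch(nn.Module):
    """Reweights channels by their normalized batch-norm scale factors (gamma)."""
    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=True)

    def forward(self, x):
        residual = x
        x = self.bn(x)
        # Normalized |gamma| serves as a per-channel salience weight:
        # channels with small scale factors contribute little and are suppressed.
        gamma = self.bn.weight.abs()
        weight = gamma / gamma.sum()
        x = x * weight.view(1, -1, 1, 1)
        return torch.sigmoid(x) * residual

if __name__ == "__main__":
    att = ChannelAttentionSketch(channels=16)
    feat = torch.randn(2, 16, 8, 8)
    out = att(feat)
    print(out.shape)  # torch.Size([2, 16, 8, 8])
```

Because the weights come directly from the existing batch-norm parameters, this kind of reweighting adds no fully connected or convolutional layers, which is consistent with the efficiency claim in the abstract.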


Related research

11/14/2022  PKCAM: Previous Knowledge Channel Attention Module
Recently, attention mechanisms have been explored with ConvNets, both ac...

10/06/2020  Rotate to Attend: Convolutional Triplet Attention Module
Benefiting from the capability of building inter-dependencies among chan...

06/07/2023  Normalization Layers Are All That Sharpness-Aware Minimization Needs
Sharpness-aware minimization (SAM) was proposed to reduce sharpness of m...

10/14/2022  Parameter-Free Average Attention Improves Convolutional Neural Network Performance (Almost) Free of Charge
Visual perception is driven by the focus on relevant aspects in the surr...

04/20/2022  Attention in Attention: Modeling Context Correlation for Efficient Video Classification
Attention mechanisms have significantly boosted the performance of video...

03/03/2023  Benchmarking White Blood Cell Classification Under Domain Shift
Recognizing the types of white blood cells (WBCs) in microscopic images ...
