Parameter-Free Average Attention Improves Convolutional Neural Network Performance (Almost) Free of Charge

10/14/2022
by Nils Körber, et al.

Visual perception is driven by focusing on relevant aspects of the surrounding world. To transfer this observation to the digital information processing of computers, attention mechanisms have been introduced to highlight salient image regions. Here, we introduce a parameter-free attention mechanism (PfAAM), a simple yet effective module that can be plugged into various convolutional neural network architectures with little computational overhead and without affecting model size. PfAAM was tested on multiple architectures for classification and semantic segmentation, improving model performance in all tested cases. This demonstrates its wide applicability as a general, easy-to-use module for computer vision tasks. The implementation of PfAAM is available at https://github.com/nkoerb/pfaam.
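The abstract does not spell out the exact formulation of PfAAM, but the name suggests attention maps derived purely by averaging, with no learnable weights. A minimal sketch of that idea, assuming spatial attention comes from averaging over channels and channel attention from averaging over spatial positions, each passed through a sigmoid and multiplied back onto the input (the function name and formulation here are illustrative, not the paper's verified definition):

```python
import numpy as np

def pfaam(x):
    """Illustrative sketch of a parameter-free average attention module.

    x: feature map of shape (C, H, W).
    Both attention maps are computed by plain averaging, so the module
    adds no learnable parameters and preserves the input shape.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    # Spatial attention: average over channels -> (1, H, W)
    spatial = sigmoid(x.mean(axis=0, keepdims=True))
    # Channel attention: average over spatial dims -> (C, 1, 1)
    channel = sigmoid(x.mean(axis=(1, 2), keepdims=True))
    # Rescale the input by both maps (broadcasting restores (C, H, W))
    return x * spatial * channel

feat = np.random.randn(8, 4, 4).astype(np.float32)
out = pfaam(feat)
assert out.shape == feat.shape  # shape-preserving, parameter-free
```

Because the module introduces no weights, it can be dropped after any convolutional block without changing the model's parameter count, which matches the "almost free of charge" claim in the title.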

Related research
12/12/2016

Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer

Attention plays a critical role in human visual experience. Furthermore,...
04/06/2023

RFAConv: Innovating Spatial Attention and Standard Convolutional Operation

Spatial attention has been demonstrated to enable convolutional neural n...
04/28/2021

Twins: Revisiting Spatial Attention Design in Vision Transformers

Very recently, a variety of vision transformer architectures for dense p...
11/10/2021

Learning to ignore: rethinking attention in CNNs

Recently, there has been an increasing interest in applying attention me...
11/24/2021

NAM: Normalization-based Attention Module

Recognizing less salient features is the key for model compression. Howe...
06/20/2019

Human vs Machine Attention in Neural Networks: A Comparative Study

Recent years have witnessed a surge in the popularity of attention mecha...
07/02/2020

ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network

This paper addresses representational bottleneck in a network and propos...
