LSAS: Lightweight Sub-attention Strategy for Alleviating Attention Bias Problem

05/09/2023
by Shanshan Zhong, et al.

In computer vision, the performance of deep neural networks (DNNs) is closely tied to their feature extraction ability, i.e., the ability to recognize and focus on the key pixel regions of an image. However, in this paper, we quantitatively and statistically show that DNNs suffer from a serious attention bias problem on many samples from popular datasets: (1) Position bias: DNNs focus entirely on label-independent regions; (2) Range bias: the regions that DNNs focus on are not completely contained within the ideal region. Moreover, we find that existing self-attention modules alleviate these biases to a certain extent, but the biases remain non-negligible. To further mitigate them, we propose a lightweight sub-attention strategy (LSAS), which utilizes high-order sub-attention modules to improve the original self-attention modules. The effectiveness of LSAS is demonstrated by extensive experiments on widely-used benchmark datasets and popular attention networks. We release our code to help other researchers reproduce the results of LSAS [https://github.com/Qrange-group/LSAS].
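The abstract does not spell out the architecture, but the idea of a "high-order" sub-attention module wrapped around a standard self-attention block can be sketched. Below is a minimal, hypothetical PyTorch illustration: an SE-style (squeeze-and-excitation) channel attention block, plus a wrapper that re-applies the lightweight attention `order` times to iteratively refine the focused regions. The names `SEAttention`, `HighOrderSubAttention`, and the `order` parameter are assumptions for illustration only, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SEAttention(nn.Module):
    """Standard squeeze-and-excitation (SE) channel attention block."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, channels, height, width)
        w = x.mean(dim=(2, 3))            # squeeze: global average pooling
        w = self.fc(w)                    # excitation: per-channel weights in (0, 1)
        return x * w.unsqueeze(-1).unsqueeze(-1)

class HighOrderSubAttention(nn.Module):
    """Hypothetical high-order wrapper: applies a lightweight sub-attention
    module `order` times so the attention is iteratively refined before the
    features move on. Illustrative sketch only, not the paper's exact method."""
    def __init__(self, channels, order=2, reduction=16):
        super().__init__()
        self.blocks = nn.ModuleList(
            SEAttention(channels, reduction) for _ in range(order)
        )

    def forward(self, x):
        out = x
        for block in self.blocks:
            out = block(out)              # each pass re-weights the feature map
        return out

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)    # dummy feature map
    attn = HighOrderSubAttention(channels=64, order=2)
    print(attn(feats).shape)              # torch.Size([2, 64, 32, 32])
```

Under this reading, "lightweight" comes from reusing small bottleneck attention blocks rather than adding heavy branches; for the actual module placement and formulation, see the paper and the released code at the repository linked above.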
