Assessing the Impact of Attention and Self-Attention Mechanisms on the Classification of Skin Lesions

12/23/2021
by Rafael Pedro, et al.

Attention mechanisms have attracted significant interest in the research community, since they promise significant improvements in the performance of neural network architectures. However, for any specific problem, we still lack a principled way to choose mechanisms and hyper-parameters that lead to guaranteed improvements. More recently, self-attention has been proposed and widely used in transformer-like architectures, leading to significant breakthroughs in some applications. In this work we focus on two forms of attention mechanisms: attention modules and self-attention. Attention modules are used to reweight the features of each layer's input tensor; different modules perform this reweighting in different ways, in fully connected or convolutional layers. The attention modules studied are completely modular, and in this work they are used with the popular ResNet architecture. Self-attention, originally proposed in the area of Natural Language Processing, makes it possible to relate all the items in an input sequence. Self-attention is becoming increasingly popular in Computer Vision, where it is sometimes combined with convolutional layers, although some recent architectures do away with convolutions entirely. In this work, we study and objectively compare a number of different attention mechanisms on a specific computer vision task, the classification of samples in the widely used Skin Cancer MNIST dataset. The results show that attention modules do sometimes improve the performance of convolutional neural network architectures, but also that this improvement, although noticeable and statistically significant, is not consistent across settings. The results obtained with self-attention mechanisms, on the other hand, show consistent and significant improvements, leading to the best results even in architectures with a reduced number of parameters.
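The two mechanism families compared in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration only: the SE-style reduction MLP for the attention module, the single-head self-attention with identity query/key/value projections, and all shapes are assumptions for the sketch, not the paper's exact configuration.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """SE-style attention module: reweight the channels of a feature map.

    x:  feature map of shape (C, H, W)
    w1: (C//r, C) and w2: (C, C//r) -- a small reduction MLP (assumed here)
    """
    z = x.mean(axis=(1, 2))                      # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)                  # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))          # FC + sigmoid -> per-channel weights in (0, 1)
    return x * s[:, None, None]                  # reweight each channel of the input tensor

def self_attention(X):
    """Scaled dot-product self-attention: relate all items in a sequence.

    X: sequence of n items with d features, shape (n, d).
    Identity Q/K/V projections are used for brevity.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                # pairwise similarities, shape (n, n)
    scores -= scores.max(axis=1, keepdims=True)  # shift for numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # row-wise softmax: attention weights
    return A @ X                                 # each output item mixes all input items
```

In a ResNet, a module like `channel_attention` would be dropped in after a convolutional block, whereas self-attention operates on the feature map flattened into a sequence of spatial positions.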


Related research

- EAANet: Efficient Attention Augmented Convolutional Networks (06/03/2022)
- Attention Augmented Convolutional Networks (04/22/2019)
- LSAS: Lightweight Sub-attention Strategy for Alleviating Attention Bias Problem (05/09/2023)
- GAttANet: Global attention agreement for convolutional neural networks (04/12/2021)
- On the Relationship between Self-Attention and Convolutional Layers (11/08/2019)
- An Empirical Study of Spatial Attention Mechanisms in Deep Networks (04/11/2019)
