Impact of Attention on Adversarial Robustness of Image Classification Models

09/02/2021
by Prachi Agrawal, et al.

Adversarial attacks against deep learning models have gained significant attention, and recent work has proposed both explanations for the existence of adversarial examples and techniques to defend models against these attacks. In computer vision, attention mechanisms have been used to focus learning on important features and have led to improved accuracy; more recently, models with attention mechanisms have been proposed to enhance adversarial robustness. In this context, this work aims at a general understanding of the impact of attention on adversarial robustness. It presents a comparative study of the adversarial robustness of non-attention and attention-based image classification models trained on the CIFAR-10, CIFAR-100 and Fashion MNIST datasets under popular white-box and black-box attacks. The experimental results show that the robustness of attention-based models may depend on the dataset used, i.e. on the number of classes involved in the classification. In contrast to their behavior on datasets with fewer classes, attention-based models are observed to show better robustness on datasets with more classes.
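The white-box attacks referred to in the abstract typically perturb an input in the direction of the gradient of the loss with respect to that input. As a minimal sketch of the idea, here is the Fast Gradient Sign Method (FGSM) applied to a toy NumPy logistic-regression classifier; the toy model, weights, and all variable names are illustrative assumptions, not details from the paper:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """White-box FGSM on a binary logistic-regression model:
    x_adv = x + eps * sign(d loss / d x)."""
    z = x @ w + b                    # logit of the clean input
    p = 1.0 / (1.0 + np.exp(-z))     # sigmoid probability of class 1
    # For binary cross-entropy, the input gradient is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Usage: start from a point the model classifies correctly as class 1.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])             # clean logit = 1.0 -> class 1
y = 1.0
x_adv = fgsm_perturb(x, w, b, y, eps=0.6)
# The perturbation lowers the logit, pushing x toward the wrong class.
print(x_adv @ w + b)
```

Black-box attacks pursue the same goal without access to gradients, e.g. by estimating them from query responses or by transferring examples from a substitute model.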

Related research:

- 08/07/2019: Improved Adversarial Robustness by Reducing Open Space Risk via Tent Activations
  Adversarial examples contain small perturbations that can remain imperce...
- 10/31/2022: Scoring Black-Box Models for Adversarial Robustness
  Deep neural networks are susceptible to adversarial inputs and various m...
- 07/12/2022: Adversarial Robustness Assessment of NeuroEvolution Approaches
  NeuroEvolution automates the generation of Artificial Neural Networks th...
- 08/21/2023: Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models
  Causal Neural Network models have shown high levels of robustness to adv...
- 03/18/2021: Enhancing Transformer for Video Understanding Using Gated Multi-Level Attention and Temporal Adversarial Training
  The introduction of Transformer model has led to tremendous advancements...
- 06/07/2021: Reveal of Vision Transformers Robustness against Adversarial Attacks
  Attention-based networks have achieved state-of-the-art performance in m...
- 05/30/2019: Identifying Classes Susceptible to Adversarial Attacks
  Despite numerous attempts to defend deep learning based image classifier...
