Deep neural network loses attention to adversarial images

06/10/2021
by Shashank Kotyan, et al.

Adversarial algorithms have been shown to be effective against neural networks across a variety of tasks. Some adversarial algorithms perturb all the pixels in an image minimally for the image classification task, whereas others perturb only a few pixels strongly. However, very little is known about why such diverse adversarial samples exist. Recently, Vargas et al. showed that the existence of these adversarial samples might be due to conflicting saliency within the neural network. We test this hypothesis of conflicting saliency by analysing the Saliency Maps (SM) and Gradient-weighted Class Activation Maps (Grad-CAM) of original samples and of several different types of adversarial samples. We also analyse how different adversarial samples distort the attention of the neural network compared to original samples. We show that, in the case of the Pixel Attack, the perturbed pixels either call the network's attention to themselves or divert the attention away from them. In contrast, the Projected Gradient Descent Attack perturbs pixels so that intermediate layers inside the neural network lose attention for the correct class. We also show that the two attacks affect the saliency maps and activation maps differently, shedding light on why defences that succeed against some attacks remain vulnerable to others. We hope that this analysis will improve understanding of the existence and effect of adversarial samples and enable the community to develop more robust neural networks.
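The kind of comparison the abstract describes can be illustrated with a short sketch: craft an adversarial image with Projected Gradient Descent and compare the vanilla saliency map (absolute input gradient of the class score) before and after the attack. This is a minimal sketch, not the authors' code; it assumes a pretrained torchvision classifier, and all function and variable names (saliency_map, pgd_attack, x, y) are illustrative only.

```python
# Minimal sketch (not the authors' code): compare the saliency map of an
# original image with that of a PGD-perturbed image, assuming a pretrained
# torchvision classifier. Names here are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def saliency_map(x, label):
    """Absolute input gradient of the class score, max over colour channels."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, label]
    score.backward()
    return x.grad.abs().max(dim=1)[0]          # shape: (1, H, W)

def pgd_attack(x, label, eps=8/255, alpha=2/255, steps=10):
    """L-infinity projected gradient descent on the cross-entropy loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), torch.tensor([label]))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()     # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)         # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                        # stay a valid image
    return x_adv.detach()

# x: a (1, 3, H, W) image tensor in [0, 1]; y: the class the model predicts for it.
x = torch.rand(1, 3, 224, 224)   # placeholder input for illustration
with torch.no_grad():
    y = model(x).argmax(dim=1).item()

sm_clean = saliency_map(x, y)
sm_adv = saliency_map(pgd_attack(x, y), y)
# A large difference between the two maps indicates the attack has shifted
# where the network "attends" for the originally predicted class.
print((sm_clean - sm_adv).abs().mean())
```

The same comparison can be repeated with Grad-CAM on an intermediate convolutional layer to probe where, inside the network, the attention to the correct class is lost.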


