Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation

09/19/2019
by Jihyeun Yoon, et al.

Deep Neural Network based classifiers are known to be vulnerable to input perturbations constructed by adversarial attacks to force misclassification. Most studies have focused either on crafting such perturbations with gradient-based attack methods or on defending models against adversarial attacks. Passing inputs through a denoiser model is one well-known way to reduce adversarial noise, although it has not significantly improved classification performance. In this study, we analyze the propagation of adversarial perturbations from an explainable AI (XAI) point of view. Specifically, we examine how adversarial perturbations evolve through CNN architectures. To analyze the propagated perturbation, we measure the normalized Euclidean distance and the cosine distance at each CNN layer between the feature map of the perturbed image passed through a denoiser and that of the non-perturbed original image. We use five well-known CNN-based classifiers and three gradient-based adversarial attacks. The experimental results show that, in most cases, the Euclidean distance increases explosively in the final fully connected layer, while the cosine distance fluctuates and vanishes at the last layer. This indicates that a denoiser can reduce the amount of noise, but it fails to prevent the degradation of classification accuracy.
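The following is a minimal sketch, not the authors' code, of the kind of layer-wise measurement the abstract describes: comparing intermediate feature maps of a clean image and of its attacked-and-denoised counterpart using a normalized Euclidean distance and a cosine distance. The choice of ResNet-50, the hook placement on top-level modules, and the normalization by the clean feature norm are assumptions made for illustration only.

```python
# Sketch: layer-wise distances between feature maps of a clean input and a
# perturbed-then-denoised input. Model, layer granularity, and normalization
# are illustrative assumptions, not the paper's exact protocol.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Collect intermediate feature maps with forward hooks on top-level modules.
features = {}

def make_hook(name):
    def hook(module, inp, out):
        features[name] = out.detach()
    return hook

for name, module in model.named_children():
    module.register_forward_hook(make_hook(name))

def layerwise_distances(x_clean, x_denoised):
    """Return per-layer (normalized Euclidean distance, cosine distance)."""
    with torch.no_grad():
        features.clear()
        model(x_clean)
        clean_feats = dict(features)

        features.clear()
        model(x_denoised)
        pert_feats = dict(features)

    results = {}
    for name in clean_feats:
        a = clean_feats[name].flatten(1)
        b = pert_feats[name].flatten(1)
        # Euclidean distance normalized by the clean feature norm (assumed scheme).
        euclid = (a - b).norm(dim=1) / (a.norm(dim=1) + 1e-12)
        # Cosine distance = 1 - cosine similarity.
        cosine = 1.0 - F.cosine_similarity(a, b, dim=1)
        results[name] = (euclid.mean().item(), cosine.mean().item())
    return results

# Usage example with random tensors standing in for preprocessed images;
# in practice x_denoised would be an adversarial example passed through a denoiser.
x_clean = torch.randn(1, 3, 224, 224)
x_denoised = torch.randn(1, 3, 224, 224)
for layer, (d_e, d_c) in layerwise_distances(x_clean, x_denoised).items():
    print(f"{layer}: normalized L2 = {d_e:.4f}, cosine distance = {d_c:.4f}")
```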


