Examining the Proximity of Adversarial Examples to Class Manifolds in Deep Networks

04/12/2022
by Štefan Pócoš et al.

Deep neural networks achieve remarkable performance in many fields. Even after proper training, however, they suffer from an inherent vulnerability to adversarial examples (AEs). In this work we shed light on the inner representations of AEs by analysing their activations in the hidden layers. We test several types of AEs, each crafted under a specific norm constraint, which affects their visual appearance and ultimately their behaviour in the trained networks. Our results on image classification tasks (MNIST and CIFAR-10) reveal qualitative differences between the individual types of AEs when comparing their proximity to the class-specific manifolds in the inner representations. We propose two methods for comparing distances to class-specific manifolds that are robust to the changing dimensionality across the layers of the network. Using these methods, we consistently confirm that some adversarial examples do not leave the proximity of the manifold of the correct class, not even in the last hidden layer of the network. Next, using the UMAP visualisation technique, we project the class activations into 2D space. The results indicate that the activations of individual AEs are entangled with the activations of the test set; this, however, does not hold for a group of crafted inputs called the rubbish class. We also confirm this entanglement numerically using the soft nearest neighbour loss.
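
As a rough illustration of the kind of analysis described above (not the authors' implementation), the sketch below projects hidden-layer activations into 2D with the umap-learn library and computes the soft nearest neighbour loss of Frosst et al. (2019) to quantify how entangled adversarial activations are with test-set activations. The activation arrays, their sizes, and the temperature value are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): project hidden-layer activations with
# UMAP and measure test/adversarial entanglement with the soft nearest
# neighbour loss. All array names, sizes and the temperature are illustrative.
import numpy as np
import umap  # pip install umap-learn


def soft_nearest_neighbour_loss(x, y, temperature=1.0):
    """Soft nearest neighbour loss (Frosst et al., 2019) for representations
    x of shape (N, D) with integer labels y of shape (N,). Higher values
    indicate that the classes are more entangled with each other."""
    dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    sims = np.exp(-dists / temperature)
    np.fill_diagonal(sims, 0.0)                      # exclude i == j
    same = (y[:, None] == y[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)
    num = (sims * same).sum(axis=1)                  # same-class neighbours
    den = sims.sum(axis=1)                           # all neighbours
    eps = 1e-12
    return -np.mean(np.log((num + eps) / (den + eps)))


# Stand-ins for hidden-layer activations of test inputs and adversarial
# examples (D = 128 is an arbitrary choice for this sketch).
rng = np.random.default_rng(0)
acts_test = rng.normal(size=(200, 128))
acts_adv = rng.normal(size=(50, 128))
acts = np.vstack([acts_test, acts_adv])
labels = np.concatenate([np.zeros(200, dtype=int), np.ones(50, dtype=int)])

# A high loss means the adversarial activations are entangled with the test
# set; the temperature is scaled to the typical squared distance of the data.
snnl = soft_nearest_neighbour_loss(acts, labels, temperature=2.0 * acts.shape[1])
print(f"SNNL: {snnl:.3f}")

# 2D UMAP projection of the same activations for visual inspection.
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(acts)
print(embedding.shape)  # (250, 2)
```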
