Adversarial Examples Target Topological Holes in Deep Networks

01/28/2019
by Thomas Gebhart et al.

It remains unclear why adversarial examples are easy to construct for deep networks that otherwise perform well on their training domain. A common suspicion is that adversarial examples lie within a small perturbation of the network's decision boundaries, or in low-density regions of the training distribution. Using persistent homology, we find that deep networks effectively have "holes" in their activation graphs, making them blind to regions of the input space that adversarial examples can exploit. Because these holes are dense in the input space, it is easy to find a perturbed image that is misclassified. By studying the topology of network activations, we find global patterns in the form of activation subgraphs that can both reliably determine whether an example is adversarial and recover the example's true category well above chance, implying that semantic information about the input is embedded globally in a deep network's activation pattern.
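
The abstract describes the method only at a high level: build the graph induced by a forward pass, with edges weighted by how strongly they carry activation, and compute its persistent homology. Below is a minimal sketch of that idea under several assumptions of my own: a toy two-layer MLP, a |w_ij * a_j| edge weighting, a filtration that admits strong edges first, and the GUDHI library for persistence. None of these details are taken from the paper itself.

```python
import numpy as np
import gudhi  # assumed persistent-homology library: pip install gudhi

rng = np.random.default_rng(0)

# Toy 2-layer MLP: 4 inputs -> 3 hidden (ReLU) -> 2 outputs.
# Node ids: inputs 0-3, hidden 4-6, outputs 7-8.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(2, 3))

def activation_edges(x):
    """Edges of the induced activation graph, weighted by |w_ij * a_j|
    (assumed weighting: edge strength = |weight x upstream activation|)."""
    h = np.maximum(W1 @ x, 0.0)  # hidden activations after ReLU
    edges = []
    for i in range(3):           # input -> hidden edges
        for j in range(4):
            edges.append((4 + i, j, abs(W1[i, j] * x[j])))
    for i in range(2):           # hidden -> output edges
        for j in range(3):
            edges.append((7 + i, 4 + j, abs(W2[i, j] * h[j])))
    return edges

def activation_persistence(x):
    """Persistence of the activation graph. Strong edges enter the
    filtration first (filtration value = -weight), so long H0 bars mark
    strongly co-activated subgraphs, and essential H1 bars are cycles in
    the graph, loosely analogous to the 'holes' the abstract describes."""
    st = gudhi.SimplexTree()
    # Insert in ascending filtration order so vertices inherit the
    # filtration value of the first (strongest) edge that touches them.
    for u, v, w in sorted(activation_edges(x), key=lambda e: -e[2]):
        st.insert([u, v], filtration=-w)
    return st.persistence()  # list of (dimension, (birth, death))

x = rng.normal(size=4)
for dim, (birth, death) in activation_persistence(x):
    print(f"H{dim}: birth={birth:.3f}, death={death:.3f}")
```

In this reading, long-lived H0 bars correspond to strongly co-activated subgraphs of the kind the authors use to classify examples, while infinite H1 bars are literal cycles in the activation graph; the negated-weight filtration is just one convenient convention for making strong edges appear early.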

Related research

03/11/2021  Improving Adversarial Robustness via Channel-wise Activation Suppressing
The study of adversarial examples and their activation has attracted sig...

07/12/2017  NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles
It has been shown that most machine learning algorithms are susceptible ...

06/25/2018  Exploring Adversarial Examples: Patterns of One-Pixel Attacks
Failure cases of black-box deep learning, e.g. adversarial examples, mig...

05/29/2018  Lightweight Probabilistic Deep Networks
Even though probabilistic treatments of neural networks have a long hist...

08/27/2016  A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples
Deep neural networks have been shown to suffer from a surprising weaknes...

04/01/2017  SafetyNet: Detecting and Rejecting Adversarial Examples Robustly
We describe a method to produce a network where current methods such as ...

11/01/2018  Excessive Invariance Causes Adversarial Vulnerability
Despite their impressive performance, deep neural networks exhibit strik...
